Posts

Test Automation for Agile Environments

Amit Nimje, May 30, 2016

By design, development projects using the Agile methodology move quickly toward frequent, new product releases. In such environments, timely quality assurance (QA) feedback is a must. To be useful, the test results must also be consistent and reliable. This combination of requirements – speed, quality, and reliability – highlights the usefulness of automating testing in Agile environments.


QA during Agile-driven projects has historically faced several challenges. One challenge is the sheer number of components that must be tested on short timelines. In the case of two or three simple dashboards, testing can be performed manually. But as project components multiply and complexity increases, short timelines make manual testing unfeasible, especially for features such as customized filter panels, tooltips, and calendar filters.


Another challenge is achieving 100% functional and data testing for each release. Aggressive timelines make frequent, often repetitive and redundant scenario testing difficult to complete on time.


Multilingual products present unique challenges to Agile teams. Suppose a dashboard supports nine different languages. It can be difficult to test nine versions of the same dashboard, each in a different language, for every new release. In these situations, testers usually focus on new features and the user interface (UI). Testing the language and data in each dashboard’s reports usually falls by the wayside.


To overcome these challenges, we developed a test automation utility using Selenium. We also employed Perl scripting to integrate the utility with database and web applications. Fig. 1 illustrates the framework for the utility.



Fig. 1 Framework for Automated Testing Utility


As illustrated, the framework uses four data input sheets (i.e., the gray boxes inside the blue box), which define the testing flow for an application. The top layer represents the data entered into a Microsoft Excel spreadsheet, which contains four tabs: one each for user credentials, language selection, testing steps, and multilingual correlation testing (e.g., across print and PDF versions of text).


Once the input sheets are ready, a batch file (run manually or on a schedule) triggers the driver script, which in turn calls the required functions from the function library. The function library takes its inputs from the spreadsheet and returns the output defined by the driver script. Finally, the driver script generates a date- and time-stamped output folder for each execution, containing a pass or fail status for each of the four data inputs, along with screenshots.
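To make the flow concrete, here is a minimal sketch of such a driver written in Python with the selenium and openpyxl packages (the original utility used Perl); the tab name, column layout, and pass/fail check are illustrative assumptions rather than the utility's actual design.

```python
# Illustrative driver sketch, not the original Perl utility. Assumes a workbook
# with a "TestSteps" tab holding (step_name, xpath_locator, expected_text) rows.
import datetime
import os

from openpyxl import load_workbook
from selenium import webdriver
from selenium.webdriver.common.by import By


def run_suite(workbook_path: str, base_url: str) -> None:
    # Create a date- and time-stamped output folder for this execution.
    out_dir = datetime.datetime.now().strftime("run_%Y%m%d_%H%M%S")
    os.makedirs(out_dir, exist_ok=True)

    steps = load_workbook(workbook_path)["TestSteps"]  # assumed tab name
    driver = webdriver.Chrome()
    results = []
    try:
        driver.get(base_url)
        for name, locator, expected in steps.iter_rows(min_row=2, values_only=True):
            element = driver.find_element(By.XPATH, locator)
            status = "Pass" if element.text.strip() == str(expected) else "Fail"
            results.append((name, status))
            driver.save_screenshot(os.path.join(out_dir, f"{name}.png"))
    finally:
        driver.quit()

    with open(os.path.join(out_dir, "results.csv"), "w") as fh:
        fh.writelines(f"{name},{status}\n" for name, status in results)
```

A one-line batch file that calls this script can then be scheduled (for example, overnight via Windows Task Scheduler), matching the manual-or-scheduled trigger described above.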


This utility supports different areas of QA, such as:

  • Data testing – Testers must verify data in the application (e.g., dashboards and grid reports) against source databases. Random sampling is often used for large data sets, but the utility can test a complete data set just as efficiently as humans can manually test a random sample.
  • Functional testing – Client requirements are sometimes vague and at times not supported by an out-of-the-box feature, so customization is common. But customization increases the functional complexity of the application. The utility performs these complex and repetitive functional testing tasks.
  • Multilingual testing – Language fluency is a prerequisite for manually testing dashboards in different languages. The utility eliminates this requirement by automating tests in all languages.
  • Correlation testing – Along with web-based dashboards, print and PDF versions of the dashboards require testing. The QA team can use the utility to verify textual consistency across web, print, and PDF versions of dashboards.
  • UI testing – The utility performs partial UI testing on dashboards, providing screenshots at any point during testing for QA and test documentation.

The utility provides numerous advantages, including:

  • Zero cost – Selenium is open source and can be downloaded for free from the Selenium website.
  • Flexible use – The utility is environment independent and can be run against other applications, using the same input spreadsheets.
  • Time savings – The utility replaces much of the need for manual testing, including redundant scenario tests in different languages and across data sets. The script can be scheduled to run overnight.
  • Scalability – The global function library supports many use cases on multiple types of projects. Most adjustments for different uses require minor modifications to the input spreadsheets.

If you work in an Agile environment and are searching for better ways to implement effective QA, contact us. We will be happy to demo this utility or to discuss developing another solution for your requirements.

How to Ensure a Successful Data Migration: Assessment and Automation Strategies

Rupesh Katkamwar, April 5, 2016

With a number of big data platforms now on the market, many companies are looking to migrate data from their current relational database management platforms to big data storage structures. However, large data migration projects are inherently complex.

While there are many reasons data migration projects run into trouble, our own experience has taught us that defining an effective and efficient testing strategy is critical to a project’s overall success. In this two-part post, we examine what a solid planning and testing strategy should include to ensure success and to avoid the most frequent failures that occur during data migration.

In this post, we focus on preparing for data migration through proper planning:

Conducting a Platform Difference Assessment
The core aspect of a data migration project is the accurate transfer of a source database to a target database. To ensure that migrated data is accurately represented in the target system, it is necessary to first assess where there may be discrepancies in the values/records of each column/table in the source and target databases. This analysis should include identifying any differences in the data types used, in time and date formats, and in how control characters are displayed. It should also assess whether there are issues with NULL, BLANK, and SPACE handling between the source and target databases.
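As a rough illustration of this kind of assessment, the sketch below compares column data types pulled from the source and target catalogs. It assumes both platforms expose ANSI information_schema views and standard DB-API cursors; real platforms differ, so the catalog query and parameter style are assumptions to adapt.

```python
# Hypothetical schema-difference check between source and target databases.
# Assumes ANSI information_schema views; adapt the catalog query per platform.
from typing import Dict, Tuple


def column_types(cursor, schema: str) -> Dict[Tuple[str, str], str]:
    # Map (table, column) -> declared data type for one database.
    cursor.execute(
        "SELECT table_name, column_name, data_type "
        "FROM information_schema.columns WHERE table_schema = %s",
        (schema,),
    )
    return {(t, c): dt for t, c, dt in cursor.fetchall()}


def report_differences(src_cursor, tgt_cursor, schema: str) -> None:
    src = column_types(src_cursor, schema)
    tgt = column_types(tgt_cursor, schema)
    for key in sorted(set(src) | set(tgt)):
        if src.get(key) != tgt.get(key):
            print(f"{key}: source={src.get(key)!r} target={tgt.get(key)!r}")
```

A difference list like this becomes the document that stakeholders review and sign off on before the migration proceeds.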

Identified differences should be documented, along with the appropriate procedures for addressing these inconsistencies, and approved by all key stakeholders before proceeding. Additionally, conducting an assessment is important for identifying the exact scope of the migration. The assessment should cover both historical data migration and incremental data migration handled through ETL jobs.

The assessment phase should also include an evaluation of what testing tool is most appropriate, including assessing a tool’s compatibility with both the source and target database and how the tool will be integrated. In a large database migration, you need a tool that is robust enough and has a large enough database capacity to handle the database comparisons. Many tools may work fine for small tables, but when comparing extremely large tables in one go, less robust tools can take a tremendous amount of time to analyze the data or can get hung up and stop working.

Use Automation Utilities
One of the biggest challenges in any data migration process, and particularly in a large data migration project, is validating the source data against the target data. Because a data migration will involve thousands of tables and hundreds of gigabytes (GB) of data, manual validation is highly inefficient and in many cases not realistic.

While sample testing and manual verification are possible, they still leave large amounts of the new database untested. Therefore, automating the testing process is the best solution. To automate the process, test scripts are written that compare and validate the data present in both the legacy and new databases. While there are several tools on the market that can help with the automation process, QuerySurge is one tool we have found to be highly successful in automating the validation process.
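Tools like QuerySurge express these checks as paired queries run against the source and the target. A stripped-down sketch of the same idea, assuming DB-API connections to both systems and dialect-neutral SQL, might look like this; the table and key names are placeholders.

```python
# Minimal source-vs-target validation: compare row counts and key ranges.
# Real tools such as QuerySurge perform much richer column-by-column diffs.
def validate_table(src_cur, tgt_cur, table: str, key: str) -> bool:
    query = f"SELECT COUNT(*), MIN({key}), MAX({key}) FROM {table}"
    src_cur.execute(query)
    src_stats = src_cur.fetchone()
    tgt_cur.execute(query)
    tgt_stats = tgt_cur.fetchone()
    if src_stats != tgt_stats:
        print(f"{table}: source {src_stats} != target {tgt_stats}")
        return False
    return True


# Usage: loop over the table list produced during the assessment phase.
# for table, key in [("customers", "customer_id"), ("orders", "order_id")]:
#     validate_table(src_cur, tgt_cur, table, key)
```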

QuerySurge is a collaborative data testing solution built specifically to automate the testing of big data and data warehouses while providing a holistic view of the data’s health. QuerySurge not only ensures that the data extracted from sources remains intact in the target system by analyzing and pinpointing up to 100 percent of all data differences quickly, but it also provides a real-time and historical view of the data’s health over time.

Whatever method you determine is best, however, automation is critical for running validation tests on large swaths of data. Ultimately, automation results in fewer errors throughout the validation testing process than manual, small-batch data testing.

To learn more about the validation types that are necessary and how to perform these tests through automated processes, stay tuned for our next post on data migration.

Boost Efficiency by Automating Your Daily MicroStrategy Change Report

Gaurav Dholwani, December 2, 2015

If you’ve ever worked on a large MicroStrategy project with multiple developers simultaneously, you know there’s a high probability that at some point, modifications made by one developer in the production environment can negatively affect the entire project.

The standard way to deal with this situation is by running daily MicroStrategy change reports that allow you to see changes made each day to the production environment. Unfortunately, it often takes significant time to identify the root cause of the issue, and even more time to hunt down the developer who made the modification so he or she can fix the error.

At InfoCepts, our team experienced this situation multiple times and knew there had to be a better, faster way — not only for ourselves but also for our clients. Fortunately, we found one.

From Manual to Automated Change Reports

To automate daily change reports in MicroStrategy, we created a freeform SQL report to provide details about all objects created or modified in the production environment during the previous day. How does the report work? Essentially, it uses MicroStrategy metadata and Enterprise Manager tables to provide the required information in Oracle environments, or in other databases if the SQL is modified.

The one prerequisite, however, is that MicroStrategy administrators must have access to all MicroStrategy metadata and Enterprise Manager tables, given that the change report depends on these tables. Then, because the SQL is already created, administrators simply need to schedule the report to run daily in MicroStrategy. Once this is done, they’ll receive an email each day with the full report.
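For illustration only, the core of such a query can be pictured as a select against the metadata object table, here wrapped in a small Python helper. The table name DSSMDOBJINFO and the column names used below vary across metadata versions, and the production report also joins Enterprise Manager tables to resolve owners and projects, so treat everything here as an assumption to verify against your own repository.

```python
# Sketch of the daily-change idea. Table and column names are assumptions that
# differ by metadata version; the Oracle date arithmetic is likewise illustrative.
CHANGE_REPORT_SQL = """
SELECT o.OBJECT_NAME, o.OBJECT_TYPE, o.MOD_TIME, o.OWNER_ID
FROM   DSSMDOBJINFO o
WHERE  o.MOD_TIME >= TRUNC(SYSDATE) - 1   -- objects touched since yesterday
ORDER BY o.MOD_TIME DESC
"""


def fetch_daily_changes(cursor):
    # Any DB-API cursor pointed at the metadata database will do here.
    cursor.execute(CHANGE_REPORT_SQL)
    return cursor.fetchall()
```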

While a similar report can be created manually in MicroStrategy by the MicroStrategy administrator, that approach takes time and doesn’t always lead to results. Automating daily reports, however, comes with three distinct advantages, allowing users to:

  • Save time in report creation.
    When using our automated report solution, MicroStrategy administrators don’t have to modify the SQL. This saves time and makes the process much simpler — in fact, the administrator doesn’t have to understand the SQL or how MicroStrategy is storing all of the metadata to create the report.
  • Gain instant visibility.
    With the standard daily change report, it’s impossible to know who modified what. So, even after the root cause is identified, administrators have to email the team of developers to find out who made the modification. Only then can the issue be fixed. With our automated change report, administrators receive details on which developer modified what, and can go directly to the source to fix the problem.
  • Get to the root cause faster.
    Our automated report makes it easy to identify the root cause, which is rarely the case with manually created change reports. With the manual method, our own MicroStrategy administrators often struggled to find the root cause, but that doesn’t happen with our automated solution.

These benefits make a big difference in day-to-day operations, affecting everything from decision-making to efficiency and your bottom line. Consider, for example, a recent problem with one of our sales reports showing incorrect sales data. Using our automated daily change report, our team quickly discovered that a metric and its formula used in the sales report had been modified the day before. We didn’t have to spend time on root cause analysis or on emailing developers to ask who had made the modification. Instead, the report provided the information we needed — and saved our MicroStrategy administrator roughly three to four hours of effort.

If you’re looking for ways to streamline change reports and other database administration and warehousing tasks, our team at InfoCepts can help. Get in touch to learn how.

Automate Your Dynamic Mappings Rule Management System

Manish Sengar, November 17, 2015

Information is a critical business asset, but only when it’s extracted and utilized effectively. With the advent of the digital revolution and adoption of the Internet of Things (IoT), businesses face increasing challenges with information. By far, the biggest hurdles are:

  • Volume – The amount of data that can be effectively collected and analyzed.
  • Velocity – The rate at which data should be ingested and processed.
  • Variety – Types of data that should be collected.
  • Veracity – Trustworthiness of the data.

In one way or another, all of these challenges relate to data quality, a significant concern right now that can potentially affect the success (or failure) of your company. As Ted Friedman and Michael Smith wrote in a Gartner report, “Poor data quality is a primary reason for 40 percent of all business initiatives failing to achieve their targeted benefits.” If your data quality suffers, you can expect even bigger challenges that affect the operational side of your business (meaning a significant correction effort is needed), as well as your company’s overall viability (if not detected and corrected in time, incorrect knowledge and potentially bad business decisions can ensue).

Where do data quality problems come from?

Data quality issues can arise at any stage or part of your business intelligence system. These include:

  • Data modelling — Incorrect representation of the business model.
  • Data sources — Issues with data provided by upstream source systems.
  • Data integration and profiling — Problems with data scenarios analyzed and the design of processes.

In addition, one of the most frequently encountered data quality issues arises at the ETL stage — specifically, when the ETL implementation is not in sync with the current business definition of data mappings. This issue typically occurs when a business changes a large number of data mappings frequently as its requirements evolve (e.g., an inventory group mapping for digital advertisers or an item group mapping for retailers). It can occur alongside other data quality issues, or in isolation even when no problems exist with the source data or ETL processes.

Most often, data mappings are defined and maintained manually by businesses, and then passed on to IT to be implemented in the business intelligence system, using ad-hoc ETL processes. In many cases, organizations do not prioritize this process as much as needed, relying instead on manual maintenance. This, in turn, leads to these challenges:

  • A higher risk of human errors due to manual efforts.
  • A longer turnaround time.
  • The potential for inter-team conflict (e.g., between business and IT).
  • Lower overall confidence in the final BI content.

What can help?
At InfoCepts, our team came up with a solution known as Dynamic Mappings Rule Management (DMRM). Essentially, DMRM is a modular, customizable system that business users can easily use to create and maintain data mappings for data transformations, calculations, and cleanup — with minimal IT intervention. Components include:

  • A user interface that allows users to:
    •  Create new mapping rules.
    •  Modify existing mapping rules.
    •  Delete existing mapping rules.
    •  Restore previous versions of mapping rules.

This can be implemented in a technology of choice, whether as a Java-based web UI or a MicroStrategy transaction report.

  • A DMRM schema that involves a set of backend tables to store user changes and help generate dynamic queries.
  • A DMRM engine, which serves as the core component of the solution. Currently, the engine is implemented in Informatica as an automated workflow that analyzes the DMRM schema, generates dynamic SQL queries, and applies the changes to target tables by executing those queries against the data warehouse. It can also be implemented in an alternative ETL tool of choice.
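As a highly simplified sketch of the engine's core step, the Python snippet below reads pending rules from a hypothetical DMRM rules table and turns each one into an UPDATE statement against the warehouse. The rules-table layout shown is an invented simplification of the real DMRM schema, and the actual engine runs as an Informatica workflow rather than as Python.

```python
# Toy version of the dynamic-SQL generation step. The dmrm_pending_rules layout
# (target_table, target_column, new_value, filter_column, filter_value) is a
# hypothetical simplification; a production engine would use bind variables
# and validation rather than string-built SQL.
def build_update_statements(rules_cursor) -> list:
    rules_cursor.execute(
        "SELECT target_table, target_column, new_value, filter_column, filter_value "
        "FROM dmrm_pending_rules"
    )
    statements = []
    for table, column, new_value, filter_col, filter_val in rules_cursor.fetchall():
        statements.append(
            f"UPDATE {table} SET {column} = '{new_value}' "
            f"WHERE {filter_col} = '{filter_val}'"
        )
    return statements
```

Each generated statement is then executed against the data warehouse and the rule marked as applied, so business users never have to hand a mapping change to IT.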

Key highlights of DMRM
Our team integrated a number of features to make DMRM highly effective, such as:

  • A customizable design. Since the solution uses an implementation-agnostic schema design and dynamic SQL queries, it can be customized to meet the needs of a variety of scenarios. To date, we have implemented it successfully for media and retail clients, with minimal changes from one implementation to another.
  • A technology-independent modular design. Due to the solution’s modular design, minimal changes are needed for new implementations. For instance, the existing UI (or existing MicroStrategy set up) can be used, while tables in the schema design can be implemented in any database, using any modelling. In addition, the DMRM engine can be implemented in shell scripting or any other ETL tool.

How it’s used, what benefits it brings
DMRM is a particularly effective solution for organizations with a business model that requires a large number of frequently changing data mappings. Likewise, it can help companies that require faster access to data with modified mappings, and that currently use ad-hoc manual processes to handle data mapping changes.

With DMRM in place, companies can gain:

  • Significantly improved data quality due to zero manual intervention.
  • Faster turnaround time due to automated processing of requested mapping changes.
  • Better coordination between business and IT teams, given that ownership of data quality (accuracy and timeliness) moves to the business.

DMRM enables users to define their own rules and experiment with the resulting mapping changes without going back to IT. It also eliminates the need to write complicated Excel formulas and transformations to derive and merge data from multiple sources. Ultimately, this leads to increased business confidence in the information and analysis delivered in BI reports, along with increased end user satisfaction — a primary goal of any BI team.

Limitations
Due to its customizable and modular design, DMRM can be modified and implemented with minimal effort. There are, however, a few considerations to keep in mind. These include:

  • A UI design that is specific to an organization’s requirements, which means the actual effort to implement can’t be estimated before requirements are available.
  • A shift in the ownership of data quality (accuracy and timeliness) to the business side of your company, which may require process changes to ensure a smooth adoption.
  • Additional awareness or training, given that your data quality is only as good as the changes made by business users.

Get in touch to learn more about DMRM — and to find out if it’s the right solution for you.

Why Automating Pass-Through Mapping Cuts Costs and Time

Chetan Dhapudkar, October 29, 2015

Historically, pass-through mapping has been a manual process. But for companies with a large volume of data in diverse formats, manual pass-through mapping is a time-consuming, costly process that is also prone to errors. Consider, for example, an international retail chain with hundreds of stores worldwide and point of sale data saved in multiple servers. The company needs to integrate the data and create a central repository, but with close to 200 tables of data in various formats, it would take a small army of developers to map the data manually.

Issues like these make the manual approach to mapping a challenge for many enterprise-level businesses. Yet the large majority (by my own estimate, 90 percent) rely on manual mapping, regardless of the volume of data and timeline of the project. By automating the process, however, your company can achieve higher quality data in significantly less time and for less cost.

How Pass-Through Mapping Automation Works
While data integration tools do not offer a simple method to automate pass-through mapping, automation is possible. It requires a deep understanding of the operating principles of the data integration tool, along with the ability to understand and replicate the XML code that the tool creates. Through code replication, one can develop a code generating utility within the tool.

How does this work? The code generation utility created through this process facilitates the synchronization of XML code and can generate the XML code for each table quickly — typically in a tenth or less of the time it takes to do manually. Additionally, the generated code enforces a uniform style throughout the mapping process, eliminating human errors, such as forgetting to map a certain row or column.
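To make the idea concrete, here is a hedged Python sketch that stamps out one mapping definition per table from a template. The XML structure shown is a generic placeholder rather than the actual Informatica, DataStage, or Talend schema; in practice, you would capture the real structure by exporting a hand-built mapping from your tool and templating that.

```python
# Illustrative code generator: fills a mapping template once per table so every
# mapping has an identical structure. The XML here is a placeholder, not a real
# integration-tool export format.
from xml.sax.saxutils import escape

MAPPING_TEMPLATE = """<mapping name="m_{table}">
  <source table="{table}">{columns}</source>
  <target table="{table}_STG">{columns}</target>
</mapping>"""


def generate_mapping(table, columns):
    column_xml = "".join(f'<column name="{escape(c)}"/>' for c in columns)
    return MAPPING_TEMPLATE.format(table=escape(table), columns=column_xml)


# Driving this from the database catalog yields one XML definition per table,
# ready to be imported back into the data integration tool.
```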

Finally, because most of the leading data integration tools, such as Informatica, DataStage, and Talend, operate using metadata, the same automation process can be adopted and deployed across a large number of data integration tools and for a variety of data mapping processes (not just pass-through mapping).

Benefits of Automation
Automating the pass-through mapping process offers a number of benefits and eliminates many of the challenges of manual mapping. These include:

  • Time savings. Manual efforts, particularly for large volume or time-sensitive pass-through mapping projects, can require significant resources and manpower to complete in a timely manner. It simply may not be possible to “scale up” temporary resources for these types of projects. And even if it is, the management challenges of overseeing a large team of temporary developers may prove unruly.
    Done manually, a single mapping takes roughly four hours to create. With automation, however, the process takes around 5 minutes, meaning you could end up creating 100 mappings in a day, once the automation principle is in place.
  • Cost savings. The costs of large-scale mapping projects run high and may even be prohibitive. Say, for instance, you need to create 200 mappings in a short time period. Since each mapping requires four hours of a developer’s time, you would need to pay a team of developers for 800 hours to complete the project. On top of developer compensation, you would need to factor in recruiting, hiring, and management costs — all of which add up.
    Pre-made automation tools do exist, but they’re also expensive and still require a developer to run the process. Automation within the data integration tool itself, as described above, can therefore provide significant cost savings.
  • Better data. With manual mapping, if someone misses a column or imports a table and accidentally gives it the wrong name (e.g., T3495 instead of T394), significant consequences will ensue. Automation ensures uniform processes, which eliminates human error and produces higher quality data. Additionally, companies can avoid spending the time or resources required to find and fix human errors.

Automating pass-through mapping within the data integration tool isn’t easy and comes with plenty of challenges. It’s also a new endeavor that most developers are still struggling to figure out and successfully achieve. But for companies with large volume or time-sensitive pass-through mapping projects, it’s by far your best, most cost-efficient bet.

At InfoCepts, we’ve developed our own code to automate pass-through mapping — and could help you, too. Reach out to learn how.

Using MicroStrategy Transaction Services and Automation to Achieve a Simple Multi-tenant User Management Interface

Gaurav Trivedi, October 21, 2015

The multi-tenancy model (1) is being adopted rapidly in the business intelligence industry: a common infrastructure is shared by multiple customers in the form of different MicroStrategy projects. This makes the model very cost-effective in terms of infrastructure management, but it’s not easy to achieve. With multiple customers hosted on the same server, it becomes necessary to design a rigid user access structure to ensure that customers have access only to their own project on the MicroStrategy platform. What user management challenges does this model bring?

  1. It creates a dependency on the MicroStrategy administrator to create users with the correct user groups assigned.
  2. The administrator must manually assign users to the correct access groups, a step that is prone to human error.
  3. Lastly and most importantly, MicroStrategy has different components, such as security roles, security filters, and ACLs, to fill in while creating users — a painful task for non-MicroStrategy users who are not accustomed to it.

This scenario represents a real-life problem for customers who have MicroStrategy-based multi-tenant products rolled out to hundreds or thousands of end users across different customers, geographies, and departments. If such an organization’s sales team wants to set up customer accounts on the MicroStrategy platform on demand, for end users who want to attend the latest product demo, creating and configuring the long list of user-level security settings can become a huge task in itself. Imagine the complexity with multiple tenant (customer) projects on the production platform and three user groups (privilege, ACL, and security filter) for each tenant: performing user creation for each attendee becomes painstakingly difficult and time-consuming.

What’s the solution?

Given these challenges, a portal to set up user accounts in MicroStrategy (much like account sign-up in any other web application, such as Gmail) can be a real boon.

We designed and implemented a “user creation portal” for one of our clients with a similar need. This portal provides users with an interface through which they pass the user account details (login and password) and select the suitable groups (for instance, this could be based on work area like HR, operations, etc.), and the rest is handled by the MicroStrategy User Management workflow, designed by InfoCepts.

Essentially, the portal we developed helps with these two tasks:

  1. Creating MicroStrategy users based on user inputs.
  2. Updating passwords and re-enabling user accounts that have expired or been disabled in MicroStrategy.
How we approached the project

Using MicroStrategy Transaction Services, a report is generated that allows users to enter the inputs required to create a username and password and to select the appropriate groups. The System Manager workflow then reads the data entered by the user from the database and takes the suitable action (user creation or modification). Success/failure logs are sent to the user via email.
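The Command Manager step can be pictured as generating a small script from each row the Transaction Services report writes to the database, as in the sketch below. The CREATE USER statement shown is only approximate Command Manager syntax, so verify it against the outline templates shipped with your MicroStrategy version before relying on anything like this.

```python
# Builds a Command Manager script for one pending portal request.
# The CREATE USER syntax is approximate; confirm it against the outline
# templates in your own Command Manager installation.
def build_user_script(login, full_name, password, groups):
    group_clause = ", ".join(f'"{g}"' for g in groups)
    return (
        f'CREATE USER "{login}" FULLNAME "{full_name}" '
        f'PASSWORD "{password}" IN GROUP {group_clause};'
    )


# System Manager would then execute the generated script and email the
# success/failure log to the requester, as described above.
```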

What MicroStrategy components does the solution utilize?

  1. MicroStrategy Command Manager to automate the user creation task.
  2. MicroStrategy System Manager to integrate the Command Manager tasks with non-MicroStrategy components.
  3. MicroStrategy Transaction Services to provide a UI for end-user data entry.
The benefits

This portal provides a number of benefits:

  1. Dependence on the MicroStrategy administrator for user management is eliminated.
  2. Since the workflow is designed using System Manager, it can be deployed quickly across different client environments by changing a few parameters.
  3. Users get success/failure logs via email, eliminating the need to monitor the logs manually.
  4. Coding is not used in this workflow, making it easy to understand and implement for end users.
Usage of the automation

User management is a very routine task in every project, and this automation can be easily adopted in situations where:

  • A multi-tenancy model is implemented.
  • The number of users is very high, thereby making administration efforts high.
  • Dependence on the MicroStrategy administrator for user creation/updating tasks is not practical.
The limitations

While the User Management portal solves a number of problems, it also comes with a few limitations. For one, any company using it would need to have MicroStrategy Transaction Services and System Manager licenses. In addition, the current workflow is designed to update only the password. Any further requirements would need to be addressed by modifying the underlying Command Manager script.

Want to learn more about how InfoCepts approaches BI automation and management projects? Reach out, and let’s find a time to talk.

_____________________________________

1 Multi-tenancy refers to content that is created once, maintained in one place, but delivered to multiple tenants/customers who view that same content using their own data.

Use Automation to Better Manage BI Software Licenses and Versions

Swapnil Pawade, October 13, 2015

Managing the license and version information of your business intelligence (BI) software may not top your list of priorities, but it should. According to a 2013 Software Audit Industry Report, 53 percent of firms surveyed said they were audited within the past two years. What’s more, BI software providers like Microsoft, Oracle, and IBM topped the list as three of the five most frequent auditors.

While the cost of non-compliance can be high, it’s not the only risk you face with expired software licenses and outdated versions. Insufficient software management can lead to overspending, lost time and productivity, and even malicious attacks from hackers looking for a way into your system. Yet most software packages have complex licensing structures and come in numerous versions, and as your organization grows, your software can proliferate quickly across 100 or more servers. Under these conditions, manual license and version tracking requires intense legwork to understand what licenses are on which servers, who is using them, when they expire, and whether they are running the most updated version.

What’s the solution? Most BI administrators rely on manual tracking, but this method is complicated, time-consuming, and error-prone. At InfoCepts, we consider automation a top solution and have devised ways to automate license/version tracking for our clients. Below, we share the many benefits automation can bring and how we approach it.

Automated license and version tracking: the advantages

BI administrators have a lot on their plate, which explains why BI software management often falls by the wayside. Automation, however, makes it faster and easier to get the information you need to make important software decisions. In particular, automation can:

  • Reduce non-compliance — Automated license tracking helps you manage compliance of your BI tools by aggregating information about software licenses and versions for you to review in a single report (or a series of reports). This, in turn, can help you avoid the devastating fees that can come with non-compliance. In 2013, for instance, Apptricity reached a $50 million settlement with the U.S. Army over software piracy: the Army was running the software on far more servers and workstations than it had paid for. While the error may not have been intentional (it rarely is), the mistake was extremely costly.
  • Cut costs — With automation, you can identify under-used or obsolete software by generating data on what licenses you have, whether they are being used, and by whom. Your team can then use this information to remove or reallocate underused software licenses and cut down on the overhead costs associated with unnecessary software. Likewise, the time you save by eliminating manual tracking can also decrease your costs. Consider, for example, that it takes a single person approximately 20 minutes per server to retrieve the necessary license and version information for four BI tools. Assuming this activity needs to happen once a month on, say, 100 servers, the time saved by automating the process reaches around 33 hours a month — or 396 hours a year.
  • Boost productivity — When software licenses expire, your team can get hit with unexpected downtime. Instead of working on a critical project, users may find themselves unproductively waiting for software to be renewed and updated. It’s not an ideal situation, but one that automation can help you avoid.
  • Stay secure — Security remains a constant threat for companies of all sizes, and running outdated versions of BI software only increases your vulnerability. Even if you’re not currently running a particular piece of software, any software residing on your system poses a risk. While many companies think network security is enough, the reality is that your network is only part of what you need to secure. Software is now a prime target, with 84 percent of all cyber-attacks in 2015 happening at the “application layer,” a Forbes article reports. So you want to do everything you can to keep your BI software secure.

Our approach to automated tracking

These benefits sound good, but automating license and version information tracking isn’t easy. At InfoCepts, we’ve created a flexible automation framework that users find simple to use and navigate. How does it work?

The automation uses a central script execution process that is compatible with all versions of the Windows operating system. The script runs on a single server and can scan more than 100 servers in approximately three hours. It’s also highly scalable and can function remotely, without manual intervention. Meanwhile, lightweight code keeps the resource utilization footprint small.
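Our script itself isn't reproduced here, but the pattern resembles the Python sketch below: sweep a server list and record whatever version marker each BI tool exposes. The server list, the file-based version checks, and the CSV output are all assumptions for illustration; a production script would use remote execution (for example, WinRM or PowerShell) and tool-specific queries.

```python
# Rough illustration of a license/version inventory sweep across servers.
# The version-file locations are placeholders; real checks are tool-specific.
import csv
from pathlib import Path

VERSION_FILES = {
    "MicroStrategy": r"\\{server}\c$\Program Files\MicroStrategy\version.txt",
    "Pentaho Kettle": r"\\{server}\c$\Pentaho\version.txt",
}


def scan(servers, out_path="bi_inventory.csv"):
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["server", "tool", "version"])
        for server in servers:
            for tool, pattern in VERSION_FILES.items():
                path = Path(pattern.format(server=server))
                version = path.read_text().strip() if path.exists() else "not found"
                writer.writerow([server, tool, version])
```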

Currently, the automation script can track and manage license and version information for MicroStrategy, IBM Cognos, Oracle 10g and 11g, and Pentaho Kettle. However, because it’s built on a robust framework with a flexible architecture, the solution is designed to easily accommodate the tracking and management of additional BI tools in the future.

So, if you run multiple BI tools across many servers, staying on top of license and version information manually just isn’t feasible — both from a manpower and a management perspective. Automating the process, however, makes it simple to gain fast access to the information you need to remain in compliance, stay secure, lower costs, and keep your organization operating with utmost efficiency.

Want to learn more about how InfoCepts approaches automation and BI application development? Reach out, and let’s find a time to talk.

Overcoming a Quality Assurance Hurdle with an Automated Testing Tool for Multi-language Dashboards

We created a fast and efficient automated quality assurance testing solution that allowed our client, a global leader in web communications and analytics, to develop customer-facing reporting dashboards in a wider range of languages.

Download