
How to Enhance Your Business Value Using Data Quality (DQ) Tools

This blog post covers some of the core concepts of data quality assessment, how the perception and management of data quality have changed over the years, and the use of data quality tools to parse, standardize, and cleanse data.

What is Quality Information?

The conventional focus of data usage has been on how data sets support the operation of transactional systems. Today, data quality, or data fitness, is perceived in the context of reuse and repurposing, and is evaluated in terms of conformance to business rules.

Why do we need DQ Tools and Techniques?

The fundamental steps in improving data quality remain the same; however, we rely on data quality tools to:

  • Detect when data errors exist
  • Assess the severity of the problem
  • Eliminate the root cause and correct the data
  • Inspect and monitor
  • Enhance the quality of the data

Key Aspects of DQ Management

Figure 1: Data Quality Management – Key aspects

Data Cleansing

Figure 2: Data Cleansing

Data cleansing is performed by combining the business rules definition with the software designed to execute the outlined rules.

Let us assume that a data value in a character string is incorrect. To correct the incorrect string, we make use of the transformative capability of a business rule.

For instance, a street-type suffix could be a Road, Street, Avenue, and so on, and each may be written in several abbreviated forms. The business rule for STREET could be defined as follows:

  • STR = STREET
  • ST = STREET
  • St = STREET
  • St. = STREET
  • Str = STREET
  • Str. = STREET

The string 1000 Main Str. will be transformed to 1000 MAIN STREET.
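
To make this concrete, here is a minimal sketch (in Python, with illustrative names) of how such a rule set might be applied as a token-level lookup; real DQ tools express these rules declaratively rather than in code.

```python
# Minimal sketch of the abbreviation rules above; STREET_RULES and
# cleanse_address are illustrative names, not part of any specific DQ tool.
STREET_RULES = {"STR": "STREET", "ST": "STREET", "ST.": "STREET", "STR.": "STREET"}

def cleanse_address(address: str) -> str:
    """Uppercase each token and replace known street-type abbreviations."""
    tokens = address.upper().split()
    return " ".join(STREET_RULES.get(token, token) for token in tokens)

print(cleanse_address("1000 Main Str."))  # -> 1000 MAIN STREET
```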

Challenge

If we were to transform a street name “St. Patricks St.” by applying the above simple business rule, the end result would be “STREET PATRICKS STREET”. The basic rule will not suffice and calls for more control.

When the same input maps to two different outputs, ordering the rules will fail. We need an approach that accommodates both context and order of execution.

Adjusting the rule set

This approach can be used to resolve data cleansing rule conflicts: we adjust the rule set to a certain extent so that the abbreviations can be distinguished, and then phase the application of the rules.

If we have two rules that share an equivalent input but have different outputs, of the form:

  • R1: Transform string X into string Y1
  • R2: Transform string X into string Y2

Then we can break that conflict by first correcting all instances of one type of context-dependent input into an intermediate form, and then applying the modified rules in a second pass. Here is our sample rule set reworked into two passes.

Pass 1:

  1. St is transformed into __STREET__ at the end of a street name
  2. St. is transformed into __STREET__ at the end of a street name

Pass 2:

  1. St is transformed into SAINT
  2. St. is transformed into SAINT
  3. __STREET__ is transformed into STREET
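
As a rough sketch, the two-pass approach could be implemented with ordered substitutions like the following; the placeholder token and regular expressions are assumptions for illustration.

```python
import re

# Pass 1: protect "St"/"St." at the END of a street name behind a placeholder.
def pass_one(address: str) -> str:
    return re.sub(r"\bSt\.?$", "__STREET__", address.strip())

# Pass 2: remaining "St"/"St." tokens become SAINT, then resolve the placeholder.
def pass_two(address: str) -> str:
    address = re.sub(r"\bSt\b\.?", "SAINT", address)
    return address.replace("__STREET__", "STREET")

print(pass_two(pass_one("St. Patricks St.")))  # -> SAINT Patricks STREET
```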

Adding Contextual Constraints

Adding contextual constraints solves the problem, as shown below. However, when a new context is introduced, it becomes an issue again, necessitating yet another contextual rule:

  1. St is transformed into SAINT at the beginning of a street name
  2. St. is transformed into SAINT at the beginning of a street name
  3. St is transformed into STREET at the end of a street name
  4. St. is transformed into STREET at the end of a street name
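
Here is a hedged sketch of what such context-aware rules might look like, with the context expressed as position anchors in regular expressions (illustrative only; commercial tools typically let you declare the context alongside each rule).

```python
import re

# Each rule carries its own context: position within the street name.
CONTEXT_RULES = [
    (re.compile(r"^St\b\.?"), "SAINT"),   # St / St. at the beginning -> SAINT
    (re.compile(r"\bSt\.?$"), "STREET"),  # St / St. at the end       -> STREET
]

def apply_context_rules(address: str) -> str:
    address = address.strip()
    for pattern, replacement in CONTEXT_RULES:
        address = pattern.sub(replacement, address)
    return address

print(apply_context_rules("St. Patricks St."))  # -> SAINT Patricks STREET
```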

Address Quality

Managing the quality of master Address and location data involves reviewing a lot of the existing documentation that has been collected from a number of different operational systems, as well as reviewing the business processes to see where location data is created, modified, or read. There are likely to be many references to operations or transformations performed on addresses, mostly with the intent of improving the quality of the address.

These transformations are referred to by varied terms such as validation, verification, standardization, cleansing, and correction.

Deliverable Address

When an item is sent to an addressed location, the following concepts need to be right:

    1. The item must be directed to a specific recipient party (either an individual or an organization).

This implies a direct connection between the sender and the recipient. The corresponding business rule suggests that the recipient must be identifiable to the sender.

    2. The address must be a deliverable address.
      This means that the address must carry enough information to enable a carrier to locate the address as a prelude to delivery.
    3. The intended recipient must be associated with the deliverable address.
      This establishes a direct dependence between the recipient and the addressed location, implying awareness of that connection.
    4. The delivery address must conform to the USPS® standard.
      An address must be complete, which by definition implies that it can be matched with current Postal Service ZIP + 4® and City State files.

USPS Addresses

Verification

A complete address matches the USPS files and has the correct ZIP + 4.

The address components must be consistent with the postal standard in terms of valid and invalid values. For example, if the USPS file says that Main Street goes from 1-100, an address with 105 Main St is invalid because it is outside the range of recognized numbers.
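
Here is a toy version of this range check, assuming a small in-memory stand-in for the USPS reference files (a real implementation would match against the ZIP + 4 and City State products).

```python
# Stand-in for the USPS reference data: recognized house-number ranges per street.
REFERENCE_RANGES = {"MAIN STREET": range(1, 101)}  # Main Street runs 1-100

def verify_house_number(number: int, street: str) -> bool:
    """Return True only if the house number falls inside the recognized range."""
    valid_range = REFERENCE_RANGES.get(street.upper())
    return valid_range is not None and number in valid_range

print(verify_house_number(105, "Main Street"))  # False: outside 1-100
print(verify_house_number(42, "Main Street"))   # True
```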

Validation

Validation here indicates that the street address is consistent with what is reported in the USPS documents.

Standardization

Standardization also means that the address uses the standard USPS abbreviations.

Non USPS Addresses

In many smaller and rural communities, the USPS does not deliver directly to homes and businesses; rather, it assigns PO boxes to every address in the community and only delivers mail to the PO Box. When a street address is verified using the USPS database, there is every probability that a non-USPS address will turn out to be invalid or bad.
The failure to check non-USPS addresses can be a problem for many businesses. However, a couple of data vendors are able to aggregate contact data from multiple sources to distinguish and confirm these non-USPS addresses. Some vendors additionally offer the choice of geocoding these non-USPS addresses to the rooftop level.

Address Standardization

Let us illustrate with an example. Assume that we need to deliver an item to a particular individual at a particular location using an agent. It is fairly easy for the agent to locate the destination if we provide just the street address, because the standard address format contains a hierarchical breakdown that refines the location at finer levels of precision. This process works in the US because of the existing postal standards.

Challenges

The problem occurs when, for some reason, the address does not conform to the standard. If the address is slightly distorted (e.g. it is missing a postal code), the odds are still good that the location can be identified. If the address has significant issues (e.g. the street number is missing, there is no street, the postal code conflicts with the city and state, or other components are missing), resolving the location becomes difficult.
There are two ways to try to deal with this problem:

    1. Treat each non-standard address as an exception
    2. Fix the problem earlier in the process by trying to transform a non-standard address into one that conforms to the standard.

Techniques to address Standardization

Changing the address into a standard form will streamline the geocoding process. This change procedure leverages the following ideas:

  1. There is a representative model for “standardized” addresses with its accompanying formats, syntax, acceptable value lists, and rules.
  2. An application is able to scan a non-standardized (or what I called a “provided descriptive”) address and differentiate between the parts that meet the standard and the ones that do not.
  3. There is an approach to mapping the non-standard parts into standard ones.

Formats, Syntax

Standard representation of data helps us determine whether a value meets the standard. For example, a street address can be split into three components:
a Number, a Name, and a Street type.

Criteria for deciding address validity

  1. The number must be a positive integer
  2. The name must have one, and only one, word
  3. The street type must be one of the following: RD, ST, AV, PL, or CT

Defining the Rules

  1. The address string must have three components to it (format)
  2. The first component must contain only the digit characters 0-9 (syntax)
  3. The first character of the first component cannot be a ‘0’ (syntax)
  4. The third component must be of length 2 (format)
  5. The third component has to have one of the valid street types (content)
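
These five rules translate almost directly into a validation routine; the sketch below is illustrative and checks only the rules listed above.

```python
VALID_STREET_TYPES = {"RD", "ST", "AV", "PL", "CT"}

def is_valid_address(address: str) -> bool:
    """Apply the five format/syntax/content rules listed above."""
    parts = address.split()
    if len(parts) != 3:                       # rule 1: exactly three components (format)
        return False
    number, name, street_type = parts
    if not number.isdigit():                  # rule 2: digits 0-9 only (syntax)
        return False
    if number.startswith("0"):                # rule 3: no leading zero (syntax)
        return False
    if len(street_type) != 2:                 # rule 4: street type of length 2 (format)
        return False
    return street_type.upper() in VALID_STREET_TYPES  # rule 5: valid street type (content)

print(is_valid_address("1000 Main ST"))  # True
print(is_valid_address("016 Main RD"))   # False: leading zero
```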

Data Enhancement

Data enhancement is a procedure that adds data to a data set to increase its potential utility.

There are various ways that data sets can be enhanced, including adapting values to meet defined standards, applying data corrections, and adding additional attributes.

Data Enhancements in Business

Figure 4: Data Enhancements – Business

Operational Purposes

Let us explain how data enhancement can be incorporated in operations.

Names of individuals appear in data records in different ways, such as:

  1. first name followed by last name
  2. last name with a comma, followed by first name
  3. with or without titles such as “Mr.” or “Dr.”

A data standardization and enhancement process can parse out the key components of a person’s name, fill in the details (if necessary) through lookups in master information tables, and reorganize those components into a format so that a customer’s identity can be set up for verification purposes.
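
Here is a rough sketch of such a name standardization step, with an illustrative (and deliberately tiny) title list; the master-table lookups are omitted for brevity.

```python
# Illustrative titles to strip; a real process would consult master reference tables.
TITLES = {"MR", "MR.", "MRS", "MRS.", "MS", "MS.", "DR", "DR."}

def standardize_name(raw: str) -> str:
    """Handle 'Last, First' ordering, strip titles, and emit 'FIRST LAST'."""
    raw = raw.strip()
    if "," in raw:                                      # "last name, first name" form
        last, first = (part.strip() for part in raw.split(",", 1))
        raw = f"{first} {last}"
    tokens = [t for t in raw.split() if t.upper() not in TITLES]
    return " ".join(t.upper() for t in tokens)

print(standardize_name("Dr. Smith, John"))  # -> JOHN SMITH
print(standardize_name("Mr. John Smith"))   # -> JOHN SMITH
```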

Analytical Purposes

Enhancements are mostly done to provide additional detail for reporting and analysis purposes. Enhancement goes beyond data standardization and correction; rather, the enhancement procedure can include more data by connecting one data set to another. The appended data can augment an analytical process to include extra information in reports and interactive visualizations.

For example, a retail company can take sales data that includes a ZIP code value and then enhance the data with demographic profiles provided by the Census Bureau to look for correlations between purchasing patterns and the documented demographics of specific locations.
Standardizing names and addresses, and linking those records to the reference data collections, allows direct linkage based on specific criteria, ranging from gross-level linkage (say, at the district level) down to specific enhancement at the individual level (such as the names of the products to which a customer subscribes). These qualitative enhancements augment the business intelligence and analytics processes to help organizations make more sales, increase revenues, and improve productivity.
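
As a sketch of this kind of enhancement, the join below appends hypothetical Census-style demographic attributes to sales records by ZIP code using pandas; the column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical inputs: sales records keyed by ZIP code, and demographic
# profiles per ZIP code derived from Census data (values are made up).
sales = pd.DataFrame({"order_id": [1, 2],
                      "zip": ["60601", "10001"],
                      "amount": [120.00, 75.50]})
demographics = pd.DataFrame({"zip": ["60601", "10001"],
                             "median_income": [68000, 72000]})

# Enhancement step: append the demographic attributes to each sales record.
enhanced = sales.merge(demographics, on="zip", how="left")
print(enhanced)
```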

Incorporating analytical outcomes into operational activities yields viable recommendations to enhance customer support or even drive additional sales.

Record Linkage and Matching

Figure 5: Record Linkage & Mapping

There are many distributed sources of data about customers, and each individual piece of gathered information holds a small amount of value. But when these disseminated bits of information are merged together, they can be used to recreate a remarkably insightful profile of the customer.


How are different data records from disparate data sets consolidated to define insightful profiles?

The answer lies in the ability to gather the diverse bits of information that belong to a single individual and then glom them together. For example, let’s presume the existence of a record in one data set that has a person’s address, a record in another data set that has that person’s telephone number, a third record that has that person’s registration number for a convection oven, another with the person’s car year, make, and model, and so on. As long as you can find all the records that are associated with each person and connect them together, you can gather all the interesting data together and create a single representative profile. That profile is then suitable for use in list generation, but it is also used for more comprehensive analytics such as segmentation, clustering analysis, and classification.

The way these records are associated together is through a process called “record linkage.” This process searches through one or more data sets looking for records that refer to the same unique entity based on identifying characteristics that can be used to distinguish one entity from all others, such as names, addresses, or telephone numbers. When two records are found to share the same pieces of identifying information, you may expect that those records can be connected together.

Challenges

  1. The records from the different data sets don’t have the same identifying attributes (one might have telephone number but the other one does not).
  2. The values in one data set use an alternate structure or configuration than the data in another data set (such as using hyphens for social security numbers in one data set but not in the other).
  3. The values in one data set are slightly different than the ones in the other data set (such as using nicknames instead of given names).
  4. One data set has the values broken out into separate data elements while the other does not (such as titles and name suffixes).
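
The usual remedy for challenges 2 and 3 above is to normalize the identifying attributes before comparing them. The sketch below is a toy illustration (exact matching after normalization); production record linkage typically relies on probabilistic or similarity-based scoring.

```python
import re

# Toy normalization: strip punctuation from phone numbers and map common
# nicknames to given names before comparing records (illustrative lists only).
NICKNAMES = {"BOB": "ROBERT", "BILL": "WILLIAM", "LIZ": "ELIZABETH"}

def normalize(record: dict) -> tuple:
    name = record.get("name", "").strip().upper()
    name = NICKNAMES.get(name, name)
    phone = re.sub(r"\D", "", record.get("phone", ""))  # keep digits only
    return (name, phone)

def is_match(record_a: dict, record_b: dict) -> bool:
    """Two records link when their normalized identifying fields agree."""
    return normalize(record_a) == normalize(record_b)

a = {"name": "Bob", "phone": "312-555-0199"}
b = {"name": "Robert", "phone": "(312) 555 0199"}
print(is_match(a, b))  # True
```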

One of the most interesting byproducts of record linkage is the ability to infer explicit facts about individuals that are obfuscated as a result of distribution of data.

Conclusion

Data quality techniques such as parsing, standardization, and business-rule-based record linkage with similarity scoring can help in assessing the similarity between two records. Organizations have to be proactively engaged in managing data quality, moving away from a reactive culture. Methodologies and techniques used for reviewing data errors, performing root cause analysis, and designing and applying remedial measures have to be standardized.

About the Author
Karthik M, Associate Architect at Mastech InfoTrellis, has six years of experience in enterprise applications, profiling, performance tuning, and design best practices.
