News & Announcements

Stay on top of all DAMA-RMC news and announcements here.

  • 10/02/2024 7:30 AM | Anonymous member (Administrator)


    Since Reference Data is a shared resource, it cannot be changed arbitrarily. The key to successful Reference Data Management is organizational willingness to relinquish local control of shared data. To sustain this support, provide channels to receive and respond to requests for changes to Reference Data. The Data Governance Council should ensure that policies and procedures are implemented to handle changes to data within reference and Master Data environments.

    Changes to Reference Data will need to be managed. Minor changes may affect a few rows of data. For example, when the Soviet Union broke into independent states, the code for the Soviet Union was deprecated and new codes were added for the successor states. In the healthcare industry, procedure and diagnosis codes are updated annually to account for refinement of existing codes, retirement of obsolete codes, and the introduction of new codes. Major revisions to Reference Data impact data structure. For example, ICD-10 Diagnostic Codes are structured very differently from ICD-9 codes. ICD-10 has a different format, and there are different values for the same concepts. More importantly, ICD-10 has additional principles of organization. ICD-10 codes have a different granularity and are much more specific, so more information is conveyed in a single code. Consequently, there are many more of them (as of 2015, there were 68,000 ICD-10 codes, compared with 13,000 ICD-9 codes).
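
    To make the difference between row-level and structural changes concrete, here is a minimal sketch (in Python) of a hypothetical country-code Reference Data set in which a code is end-dated rather than deleted, alongside a simple ICD-9-to-ICD-10 crosswalk; the field names and sample mappings are illustrative assumptions, not a published standard.

        from dataclasses import dataclass
        from datetime import date
        from typing import Optional

        @dataclass
        class ReferenceCode:
            """One row in a Reference Data set. Deprecated codes are
            end-dated, never deleted, so historical records stay valid."""
            code: str
            description: str
            effective_date: date
            deprecation_date: Optional[date] = None

        # Row-level change: deprecate the Soviet Union code, add successors.
        country_codes = [
            ReferenceCode("SU", "Soviet Union", date(1974, 1, 1), date(1992, 8, 30)),
            ReferenceCode("RU", "Russian Federation", date(1992, 8, 30)),
            ReferenceCode("UA", "Ukraine", date(1992, 8, 30)),
        ]

        # Structural change: ICD-10 is more granular than ICD-9, so the
        # relationship is one-to-many and requires a crosswalk table,
        # not an update in place (the mappings below are illustrative).
        icd9_to_icd10 = {
            "250.00": ["E11.9"],
            "813.42": ["S52.501A", "S52.502A"],
        }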

    The mandated use of ICD-10 codes in the US in 2015 required significant planning. Healthcare companies needed to make system changes as well as adjustments to impacted reporting to account for the new standard.

    Types of changes include:

    • Row level changes to external Reference Data sets
    • Structural changes to external Reference Data sets
    • Row level changes to internal Reference Data sets
    • Structural changes to internal Reference Data sets
    • Creation of new Reference Data sets

    Changes can be planned / scheduled or ad hoc. Planned changes, such as monthly or annual updates to industry standard codes, require less governance than ad hoc updates. The process to request new Reference Data sets should account for potential uses beyond those of the original requestor.

    Change requests should follow a defined process, as illustrated in this figure. When requests are received, stakeholders should be notified so that impacts can be assessed. If changes need approval, discussions should be held to obtain that approval. The outcome should then be communicated to all affected parties.
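
    A minimal sketch of such a change-request flow, assuming a simple status progression (received, under assessment, approved or rejected, then communicated); the class and status names are invented for illustration:

        from dataclasses import dataclass, field
        from enum import Enum, auto

        class Status(Enum):
            RECEIVED = auto()
            UNDER_ASSESSMENT = auto()
            APPROVED = auto()
            REJECTED = auto()
            COMMUNICATED = auto()

        @dataclass
        class ChangeRequest:
            summary: str
            status: Status = Status.RECEIVED
            log: list = field(default_factory=list)

        def process(request: ChangeRequest, approvals: list) -> ChangeRequest:
            """Walk one request through the flow described above:
            notify stakeholders, assess impact, decide, communicate."""
            request.log.append("Stakeholders notified; impact assessment begun")
            request.status = Status.UNDER_ASSESSMENT
            request.status = Status.APPROVED if all(approvals) else Status.REJECTED
            request.log.append(f"Decision: {request.status.name}")
            request.status = Status.COMMUNICATED  # outcome announced either way
            return request

        req = process(ChangeRequest("Add successor codes for deprecated 'SU'"),
                      approvals=[True, True])
        print(req.status, req.log)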

  • 09/25/2024 10:32 AM | Anonymous member (Administrator)


    There are several basic architectural approaches to reference and Master Data integration. Each Master Data subject area will likely have its own system of record. For example, the human resource system usually serves as the system of record for employee data. A CRM system might serve as the system of record for customer data, while an ERP system might serve as the system of record for financial and product data.

    The data sharing hub architecture model shown in this figure represents a hub-and-spoke architecture for Master Data. The Master Data hub can handle interactions with spoke items such as source systems, business applications, and data stores while minimizing the number of integration points. A local data hub can extend and scale the Master Data hub. (See Chapter 8.)

    Each of the three basic approaches to implementing a Master Data hub environment has pros and cons:

    • A Registry is an index that points to Master Data in the various systems of record. The systems of record manage Master Data local to their applications. Access to Master Data comes through the master index. A registry is relatively easy to implement because it requires few changes in the systems of record. However, complex queries are often required to assemble Master Data from multiple systems, and business rules that address semantic differences across systems must be implemented in multiple places (a minimal sketch of this trade-off follows this list).
    • In a Transaction Hub, applications interface with the hub to access and update Master Data. The Master Data exists within the Transaction Hub and not within any other applications. The Transaction Hub is the system of record for Master Data. Transaction Hubs enable better governance and provide a consistent source of Master Data. However, it is costly to remove the functionality to update Master Data from existing systems of record. Business rules are implemented in a single system: the Hub.
    • A Consolidated approach is a hybrid of Registry and Transaction Hub. The systems of record manage Master Data local to their applications. Master Data is consolidated within a common repository and made available from a data-sharing hub, the system of reference for Master Data. This eliminates the need to access Master Data directly in the systems of record. The Consolidated approach provides an enterprise view with limited impact on systems of record. However, it entails replication of data, and there will be latency between the hub and the systems of record.
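
    As referenced in the Registry bullet above, the sketch below keeps only pointers in the master index, so a consolidated view must be assembled at query time; the system names, identifiers, and fields are invented for the example.

        # Hypothetical Registry-style hub: the master index holds pointers
        # only; attribute data stays in the systems of record.
        crm = {"C-17": {"name": "Acme Corp", "phone": "555-0100"}}
        erp = {"9041": {"name": "ACME Corporation", "credit_limit": 50_000}}
        systems = {"CRM": crm, "ERP": erp}

        # One golden ID mapped to (system, local ID) pointers.
        master_index = {"MDM-001": [("CRM", "C-17"), ("ERP", "9041")]}

        def assemble(golden_id):
            """Build a consolidated view at query time. This is the Registry
            cost: every read is a multi-system join, and semantic differences
            (e.g., name spellings) must be reconciled at each consumer."""
            return {system: systems[system][local_id]
                    for system, local_id in master_index[golden_id]}

        print(assemble("MDM-001"))
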
  • 09/18/2024 7:00 AM | Anonymous member (Administrator)


    Key processing steps for MDM are illustrated in this figure. They include data model management; data acquisition; data validation, standardization, and enrichment; entity resolution; and stewardship and sharing.

    In a comprehensive MDM environment, the logical data model will be physically instantiated in multiple platforms. It guides the implementation of the MDM solution, providing the basis of data integration services. It should guide how applications are configured to take advantage of data reconciliation and data quality verification capabilities.
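
    Of the steps listed above, entity resolution is the most algorithmic. Below is a minimal sketch of one common approach (standardize, then compare with a string-similarity score against a match threshold); the normalization rules and threshold are illustrative assumptions, not a prescription from the DMBOK.

        from difflib import SequenceMatcher

        def normalize(name: str) -> str:
            """Standardization step: case-fold, strip punctuation and
            common corporate suffixes before comparison."""
            name = name.lower().replace(",", "").replace(".", "")
            for suffix in (" incorporated", " inc", " corporation", " corp"):
                name = name.removesuffix(suffix)
            return " ".join(name.split())

        def is_match(a: str, b: str, threshold: float = 0.9) -> bool:
            """Entity resolution: treat two records as the same party if
            their normalized names are similar enough."""
            score = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
            return score >= threshold

        print(is_match("ACME Corp.", "Acme Corporation"))   # True
        print(is_match("ACME Corp.", "Apex Logistics"))     # False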

  • 09/11/2024 7:00 AM | Anonymous member (Administrator)


    In any organization, certain data is required across business areas, processes, and systems. The overall organization and its customers benefit if this data is shared and all business units can access the same customer lists, geographic location codes, business unit lists, delivery options, part lists, accounting cost center codes, governmental tax codes, and other data used to run the business. People using data generally assume a level of consistency exists across the organization, until they see disparate data.

    In most organizations, systems and data evolve more organically than data management professionals would like. Particularly in large organizations, various projects and initiatives, mergers and acquisitions, and other business activities result in multiple systems executing essentially the same functions, isolated from each other. These conditions inevitably lead to inconsistencies in data structure and data values between systems. This variability increases costs and risks. Both can be reduced through the management of Master Data and Reference Data.

  • 09/04/2024 7:00 AM | Anonymous member (Administrator)

    Documents, records, and other unstructured content represent risk to an organization. Managing this risk and getting value from this information both require governance. Drivers include:

    • Legal and regulatory compliance
    • Defensible disposition of records
    • Proactive preparation for e-discovery
    • Security of sensitive information
    • Management of risk areas such as email and Big Data

    Principles of successful Information Governance programs are emerging. One set of principles is the ARMA GARP® principles (see Section 1.2). Other principles include:

    • Assign executive sponsorship for accountability
    • Educate employees on information governance responsibilities
    • Classify information under the correct record code or taxonomy category
    • Ensure authenticity and integrity of information
    • Determine that the official record is electronic unless specified differently
    • Develop policies for alignment of business systems and third parties to information governance standards
    • Store, manage, make accessible, monitor, and audit approved enterprise repositories and systems for records and content
    • Secure confidential or personally identifiable information
    • Control unnecessary growth of information
    • Dispose of information when it reaches the end of its lifecycle
    • Comply with requests for information (e.g., discovery, subpoena, etc.)
    • Improve continuously

    The Information Governance Reference Model (IGRM) shows the relationship of Information Governance to other organizational functions. The outer ring includes the stakeholders who put policies, standards, processes, tools and infrastructure in place to manage information. The center shows a lifecycle diagram with each lifecycle component within the color or colors of the stakeholder(s) who executes that component. The IGRM complements ARMA’s GARP®.

    Sponsorship by someone within or close to the C-suite is a critical requirement for the formation and sustainability of the Information Governance program. A cross-functional, senior-level Information Council or Steering Committee should be established and meet on a regular basis. The Council is responsible for an enterprise Information Governance strategy, operating procedures, guidance on technology and standards, communications and training, monitoring, and funding. Information Governance policies are written for the stakeholder areas, and then ideally technology is applied for enforcement.

  • 08/28/2024 7:00 AM | Anonymous member (Administrator)


    Discovery is a legal term that refers to the pre-trial phase of a lawsuit, during which both parties request information from each other to find facts for the case and to see how strong the arguments are on either side. The US Federal Rules of Civil Procedure (FRCP) have governed the discovery of evidence in lawsuits and other civil cases since 1938. For decades, paper-based discovery rules were applied to e-discovery. In 2006, amendments to the FRCP accommodated the discovery practice and requirements of electronically stored information (ESI) in the litigation process.

    Other global regulations have requirements specific to the ability of an organization to produce electronic evidence. Examples include the UK Bribery Act, Dodd-Frank Act, Foreign Account Tax Compliance Act (FATCA), Foreign Corrupt Practices Act, EU Data Protection Regulations and Rules, global anti-trust regulations, sector-specific regulations, and local court procedural rules.

    Electronic documents usually have Metadata (which may not be available for paper documents) that plays an important part in evidence. Legal requirements come from the key legal processes such as e-discovery, as well as data and records retention practices, the legal hold notification (LHN) process, and legally defensible disposition practices. LHN includes identifying information that may be requested in a legal proceeding, locking that data or document down to prevent editing or deletion, and then notifying all parties in an organization that the data or document in question is subject to a legal hold.
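
    A minimal sketch of those LHN steps (identify, lock, notify) over an invented in-memory document store; a real implementation would sit on top of the organization's repositories, directories, and messaging systems.

        from dataclasses import dataclass, field

        @dataclass
        class Document:
            doc_id: str
            custodian: str
            on_hold: bool = False

        @dataclass
        class LegalHold:
            matter: str
            held_ids: list = field(default_factory=list)

            def apply(self, documents, custodians, send):
                """Identify responsive documents, lock them against editing
                or deletion, then notify every custodian of the hold."""
                for doc in documents:
                    doc.on_hold = True              # lock step
                    self.held_ids.append(doc.doc_id)
                for person in custodians:
                    send(person, f"Legal hold for matter {self.matter}: "
                                 "do not alter or delete the identified items.")

        hold = LegalHold("2024-CV-1138")  # matter number invented
        hold.apply([Document("D-1", "jdoe")],
                   custodians=["jdoe", "records@corp.example"],
                   send=lambda who, msg: print(who, "<-", msg))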

    This figure depicts a high-level Electronic Discovery Reference Model developed by EDRM, a standards and guidelines organization for e-discovery. This framework provides an approach to e-discovery that is useful for people involved in identifying how and where the relevant internal data is stored, what retention policies apply, what data is not accessible, and what tools are available to assist in the identification process.

    The EDRM model assumes that data or information governance is in place. The model includes eight e-discovery phases that can be iterative. As e-discovery progresses, the volume of discoverable data and information is greatly reduced as its relevance increases.

    The first phase, Identification, has two sub-phases: Early Case Assessment and Early Data Assessment (not depicted in the diagram). In Early Case Assessment, the legal case itself is assessed for pertinent information, called descriptive information or Metadata (e.g., keywords, date ranges, etc.). In Early Data Assessment, the types and locations of data relevant to the case are assessed. Data assessment should identify policies related to the retention or destruction of relevant data so that ESI can be preserved. Interviews should be held with records management personnel, data custodians or data owners, and information technology personnel to obtain pertinent information. In addition, the involved personnel need to understand the case background, the legal hold, and their role in the litigation.

    The next phases in the model are Preservation and Collection. Preservation ensures that the data that has been identified as potentially relevant is placed in a legal hold so it is not destroyed. Collection includes the acquisition and transfer of identified data from the company to its legal counsel in a legally defensible manner.

    During the Processing phase, data is de-duplicated, searched, and analyzed to determine which data items will move forward to the Review phase. In the Review phase, documents are identified to be presented in response to the request. Review also identifies privileged documents that will be withheld. Much of the selection depends on Metadata associated with the documents.
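
    De-duplication during Processing is commonly hash-based: byte-identical copies collapse to a single instance that is reviewed once. A minimal sketch, assuming document content is available as bytes:

        import hashlib

        def deduplicate(documents):
            """Group document IDs by content hash; every member of a group
            beyond the first is a duplicate that need not be reviewed again."""
            groups = {}
            for doc_id, content in documents.items():
                digest = hashlib.sha256(content).hexdigest()
                groups.setdefault(digest, []).append(doc_id)
            return groups

        docs = {"A1": b"Q3 forecast", "A2": b"Q3 forecast", "B7": b"board minutes"}
        for digest, ids in deduplicate(docs).items():
            print(ids[0], "duplicates:", ids[1:])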

    Processing and Review depend on analysis, but Analysis is called out as a separate phase with a focus on content. The goal of content analysis is to understand the circumstances, facts, and potential evidence in litigation or investigation, in order to formulate a strategy in response to the legal situation.

    In the Production phase, data and information are turned over to opposing counsel, based on agreed-to specifications. Original sources of information may be files, spreadsheets, email, databases, drawings, photographs, data from proprietary applications, website data, voicemail, and much more. The ESI can be collected, processed and output to a variety of formats. Native production retains the original format of the files. Near-native production alters the original format through extraction and conversion. ESI can be produced in an image, or near paper, format. Fielded data is Metadata and other information extracted from native files when ESI is processed and produced in a text-delimited file or XML load file. The lineage of the materials provided during the Production phase is important, because no one wants to be accused of altering data or information provided.
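
    As an example of fielded production, the sketch below writes extracted Metadata to a text-delimited load file that would accompany the produced files; the field set is an invented minimal example, not a standard load-file specification.

        import csv

        # Hypothetical extracted Metadata for two produced items.
        items = [
            {"BatesBegin": "ABC000001", "Custodian": "jdoe",
             "SentDate": "2015-03-02", "NativePath": "NATIVE/ABC000001.xlsx"},
            {"BatesBegin": "ABC000002", "Custodian": "rroe",
             "SentDate": "2015-03-04", "NativePath": "NATIVE/ABC000002.msg"},
        ]

        with open("production.dat", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=items[0].keys(), delimiter="\t")
            writer.writeheader()          # one row of Metadata per produced file
            writer.writerows(items)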

    Displaying the ESI at depositions, hearings, and trials is part of the Presentation phase. The ESI exhibits can be presented in paper, near paper, near-native and native formats to support or refute elements of the case. They may be used to elicit further information, validate existing facts or positions, or persuade an audience.

  • 08/21/2024 7:00 AM | Anonymous member (Administrator)


    Document management includes records management. Managing records has special requirements. Records management includes the full lifecycle: from record creation or receipt through processing, distribution, organization, and retrieval, to disposition. Records can be physical (e.g., documents, memos, contracts, reports, or microfiche); electronic (e.g., email content, attachments, and instant messaging); content on a website; documents on all types of media and hardware; and data captured in databases of all kinds. Hybrid records, such as aperture cards (a paper record with an embedded microfiche window containing details or supporting material), combine formats. A Vital Record is a type of record required to resume an organization's operations in the event of a disaster.

    Trustworthy records are important not only for record keeping but also for regulatory compliance. Having signatures on the record contributes to a record’s integrity. Other integrity actions include verification of the event (i.e., witnessing in real time) and double-checking the information after the event.

    Well-prepared records have characteristics such as:

    • Content: Content must be accurate, complete and truthful.
    • Context: Descriptive information (Metadata) about the record’s creator, date of creation, or relationship to other records should be collected, structured, and maintained with the record at the time of record creation (a minimal sketch of such Metadata follows this list).
    • Timeliness: A record should be created promptly after the event, action or decision occurs.
    • Permanency: Once designated as records, records cannot be changed for as long as they are legally required to exist.
    • Structure: The appearance and arrangement of a record’s content should be clear. Records should be captured on the correct forms or templates. Content should be legible, and terminology should be used consistently.
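
    A minimal sketch of capturing such descriptive Metadata at creation time; the field names are illustrative, not a mandated Metadata schema (the frozen flag echoes the Permanency point above):

        from dataclasses import dataclass, field
        from datetime import datetime, timezone

        @dataclass(frozen=True)   # frozen: once created, fields cannot change
        class RecordMetadata:
            creator: str
            title: str
            related_records: tuple = ()
            created_at: str = field(
                default_factory=lambda: datetime.now(timezone.utc).isoformat())

        meta = RecordMetadata(creator="jdoe", title="Q3 supplier contract",
                              related_records=("REC-2024-0142",))
        print(meta)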

    Many records exist in both electronic and paper formats. Records Management requires the organization to know which copy (electronic or paper) is the official ‘copy of record’ to meet record keeping obligations. Once the copy of record is determined, the other copy can be safely destroyed.

  • 08/14/2024 9:31 AM | Anonymous member (Administrator)


    Exciting updates are coming to the Certified Data Management Professional (CDMP) certification exams starting in October 2024. The exams will now cover the DAMA-DMBOK2 Revised Edition, an updated and improved version of the Data Management Body of Knowledge. This revised edition addresses known inconsistencies and inaccuracies, making it a more reliable and comprehensive resource for data management professionals. For those preparing for the CDMP certification, it's important to note that this revised edition will be the new reference material.

    The DAMA-DMBOK2 Revised Edition has been created to ensure the content remains relevant and accurate for data management practitioners. DAMA International embarked on this update to improve upon the previous version, incorporating feedback from members and volunteers. The revised edition aims to provide a more consistent and precise framework, making it easier for professionals to understand and apply the principles of data management.

    Key improvements in the DAMA-DMBOK2 Revised Edition include standardized terminology and acronyms, corrections of typos and errors, improvements to context diagrams, and enhancements to the Data Quality chapter. These updates ensure the content is not only accurate but also easier to comprehend and apply. The revised edition is available for purchase (and discounted for Professional Members) through DAMA-RMC at DMBoK.

    For those currently preparing for the CDMP certification, you can continue using the DAMA-DMBOK 2nd Edition until October 2024. After that date, the revised edition will be the authoritative resource.

    We encourage you to join one of our upcoming 30-minute informational sessions on the CDMP Study Group during the week of August 26th. You can also enroll in our 12-week virtual study program, which will start in September. To enroll in the study program, you must be a Professional Member; anyone can attend the informational sessions. We also have a self-paced study option available.

    Information session links:

    For more information, please contact jhorner@dama-rockymountainchapter.org or visit the CDMP Webpage. Additionally, please update your DAMA RMC profile to indicate your interest in the CDMP so we can keep you up to date on all things DMBoK and CDMP.

