As NCQA accelerates the transition to digital quality measures (dQMs) for HEDIS® reporting, the ability to accurately and efficiently calculate these measures hinges on the availability of high-quality, standardized clinical data. For payers, this shift represents both an opportunity and a challenge: while dQMs promise greater scalability, automation, and alignment with real-world care delivery, they also demand a level of data consistency and computability that is often lacking in real-world clinical sources.
Clinical data typically arrives in a highly variable state: outdated codes, local or proprietary vocabularies, free-text clinical documentation, or partial records from disparate EHR systems. This variability is a major barrier to digital HEDIS® measurement. Without semantic alignment, critical elements such as lab results, screenings, diagnoses, and encounter details may be missed or misclassified, leading to inaccurate measure calculations, increased manual review, and lower completeness scores.
This session introduces Smile Digital Health’s Semantic Standardization Service, an AI-enhanced solution designed to prepare real-world clinical data for high-stakes use cases like HEDIS® dQMs. Built on Smile’s scalable, FHIR-native data platform, the service uses a layered approach that combines:

- FHIR-based transformation pipelines to map and normalize data structures
- Terminology services that standardize codes using authoritative vocabularies, including SNOMED CT, LOINC, ICD-10, CPT, and RxNorm
- AI-assisted natural language processing (NLP) to extract clinical intent from unstructured notes and map it to computable codes
- Version-aware code management to ensure longitudinal accuracy across evolving code systems
- Validation workflows that ensure downstream measure logic receives clean, complete, and properly typed data

Attendees will explore how semantic standardization enables the successful execution of the clinical logic required by HEDIS® measures, including early detection of numerator-qualifying events, correct identification of denominators and exclusions, and the extraction of results and values from diverse formats and sources.
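To make the code-standardization layer concrete, here is a minimal sketch of how a local lab code on a FHIR Observation might be rewritten to LOINC. The concept map and local code system below are hypothetical, used only for illustration; in a production deployment this lookup would be delegated to a FHIR terminology server (for example, via the standard ConceptMap/$translate operation) rather than an in-memory table.

```python
# Illustrative sketch: normalize a local lab coding on a FHIR Observation to LOINC.
# The mapping table and the "example.org" code system are hypothetical; a real
# pipeline would resolve codes through a FHIR terminology service.

LOCAL_TO_LOINC = {
    # (local system, local code) -> (standard system, code, display)
    ("http://example.org/local-labs", "A1C-01"):
        ("http://loinc.org", "4548-4", "Hemoglobin A1c/Hemoglobin.total in Blood"),
}

def normalize_observation(obs: dict) -> dict:
    """Rewrite any locally coded entries to the standard vocabulary where a mapping exists."""
    for coding in obs.get("code", {}).get("coding", []):
        key = (coding.get("system"), coding.get("code"))
        if key in LOCAL_TO_LOINC:
            system, code, display = LOCAL_TO_LOINC[key]
            coding.update({"system": system, "code": code, "display": display})
    return obs

raw = {
    "resourceType": "Observation",
    "code": {"coding": [{"system": "http://example.org/local-labs", "code": "A1C-01"}]},
    "valueQuantity": {"value": 6.9, "unit": "%"},
}
normalized = normalize_observation(raw)
print(normalized["code"]["coding"][0]["code"])  # -> 4548-4
```

Once the coding carries a standard system and code, downstream measure logic can test it directly against the HEDIS® value sets instead of maintaining per-source exceptions.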
Through real-world examples, the session will show how this capability delivers reliable, quality-enhancing data sources for payers, reducing the need for manual abstraction, chart chases, or supplemental data collection. Key takeaways will include:

- The role of semantic normalization in enabling scalable, accurate, and automated digital quality measurement
- A look into the technical architecture supporting semantic enrichment, including AI and NLP
- How semantic alignment supports broader payer priorities such as value-based care, risk adjustment, and regulatory reporting

By adopting semantic standardization practices, payers can turn their data into measurement-grade, standardized, and computable clinical content. This translates directly into greater operational efficiency, better measure completeness, and stronger performance in the quality programs that influence ratings, reimbursement, and member outcomes.

