AI in Healthcare: Cutting Through the Noise & Overcoming Data Barriers for Success
Abstract (TL;DR)
U.S. healthcare stands to gain immensely from AI, but technical data barriers impede adoption. Key challenges include poor data quality, fragmented and siloed datasets, interoperability gaps, outdated legacy systems, inconsistent governance and standards, and stringent privacy and regulatory constraints. These issues—compounded by massive data volumes and “black box” AI concerns—make it difficult to integrate AI into clinical workflows. Overcoming these barriers requires robust data governance, modern infrastructure, and rigorous validation. Management consulting firms like Mesh Digital LLC can guide healthcare organizations in improving data readiness, ensuring compliance, and building trustworthy, workflow-friendly AI solutions that unlock healthcare’s digital potential.
Executive Summary
Artificial intelligence (AI) offers transformative potential for U.S. healthcare—from predicting patient deterioration to automating administrative tasks. Yet, despite countless pilot projects and proven algorithms, AI adoption in clinical practice remains slow. A primary reason is technical data-related barriers that prevent AI solutions from being developed, integrated, or trusted at scale. This insights article examines key technical barriers to AI in healthcare—such as data quality, silos, interoperability, legacy infrastructure, governance, and privacy—and how they constrain innovation. It also highlights real-world examples of organizations overcoming these hurdles and describes how expert partners like Mesh Digital LLC can help healthcare organizations address these challenges.
Fragmented Data: Quality, Silos, and Interoperability
Data Quality and Harmonization Issues
Data quality and harmonization issues are fundamental barriers. Healthcare data are often incomplete, error-prone, or inconsistent in format. For example, an estimated 80% of medical data is unstructured and goes untapped (free-text notes, images, signals) because it’s difficult to integrate into traditional databases (Kong, 2019).
This means vast troves of clinical information are not readily usable for AI modeling. Even structured data can suffer from inconsistencies—different hospitals may use varying codes or units for the same lab result, requiring labor-intensive harmonization before any algorithm can learn from it. It’s no surprise that in one survey, 33% of health system executives identified poor data quality as a top barrier to scaling digital and AI initiatives (Eastburn et al., 2024). Without reliable, standardized data, even the best algorithms will produce unreliable results.
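To make the harmonization task concrete, the minimal sketch below (Python with pandas) converts lab results reported in different units into a single canonical unit before any modeling. The column names, unit labels, and the creatinine conversion factor are illustrative assumptions, not a reference to any particular EHR schema.

```python
import pandas as pd

# Illustrative extract: the same lab (serum creatinine) reported by two
# hospitals in different units. All values and column names are hypothetical.
labs = pd.DataFrame({
    "patient_id": ["A1", "A2", "B1", "B2"],
    "lab_name":   ["creatinine", "creatinine", "creatinine", "creatinine"],
    "value":      [1.1, 0.9, 97.0, 80.0],
    "unit":       ["mg/dL", "mg/dL", "umol/L", "umol/L"],
})

# Conversion factors to a canonical unit (mg/dL); 1 mg/dL creatinine ≈ 88.4 umol/L.
TO_MG_DL = {"mg/dL": 1.0, "umol/L": 1.0 / 88.4}

def harmonize(row: pd.Series) -> float:
    """Convert a lab value to the canonical unit, or NaN if the unit is unknown."""
    factor = TO_MG_DL.get(row["unit"])
    return row["value"] * factor if factor is not None else float("nan")

labs["value_mg_dl"] = labs.apply(harmonize, axis=1)
print(labs)
```

At enterprise scale this logic lives in terminology services and automated data-quality rules rather than ad hoc scripts, but the underlying mapping work is the same.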
Interoperability Challenges
Compounding the problem, critical healthcare data often reside in silos. Patient information is fragmented across electronic health record (EHR) systems, laboratory systems, pharmacies, insurers, and countless other sources. Accessing a complete dataset for an AI project—say, to predict hospital readmissions—means pulling data from many systems that don’t readily talk to each other. Despite progress under federal initiatives, true interoperability remains limited. As of 2023, only 43% of U.S. hospitals report routinely engaging in all four domains of electronic data exchange (sending, receiving, finding, and integrating data). While 71% of hospitals say they can access outside patient data, just 42% report that their clinicians actually use it routinely at the point of care (Gabriel et al., 2024). In other words, even when technical exchange is possible, the data often isn’t flowing in a useful way. Siloed and non-integrated data hinder AI algorithms that thrive on comprehensive information.
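As a concrete illustration of standards-based exchange, the hedged sketch below queries a FHIR server for a patient’s laboratory observations using Python’s requests library. The base URL and patient identifier are placeholders; a real integration would also handle SMART on FHIR/OAuth 2.0 authorization, paging, and error handling.

```python
import requests

# Placeholder endpoint and patient ID -- substitute your organization's
# FHIR server and a real identifier. Authentication is omitted for brevity.
FHIR_BASE = "https://fhir.example.org/R4"
PATIENT_ID = "12345"

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "laboratory", "_count": 50},
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()

# Each entry in the returned Bundle is an Observation resource; pull out code and value.
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    code = obs.get("code", {}).get("text", "unknown")
    value = obs.get("valueQuantity", {})
    print(code, value.get("value"), value.get("unit"))
```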
Technical Debt: Legacy Systems and Outdated Infrastructure
Legacy systems and outdated infrastructure exacerbate these interoperability challenges. Many provider organizations still run decades-old software or on-premises data centers that struggle to handle modern data loads and integration needs. In a 2024 industry survey, hospital leaders cited challenges with legacy IT systems as the second-greatest obstacle to digital transformation—behind only budget constraints (Eastburn et al., 2024). Older EHR platforms may lack APIs for data sharing or require costly interfaces to connect with new AI tools. These monolithic systems become “walled gardens” of data. Additionally, legacy infrastructure may not scale for today’s data volume and variety. Healthcare data is exploding in size and types—electronic records, high-resolution medical imaging, genomic sequences, wearable sensor readings, patient-reported data, and more. Astonishingly, about 30% of the world’s data volume is generated by the healthcare industry, and healthcare data is projected to grow at 36% annually through 2025 (Hassan et al., 2024), a growth rate outpacing most other sectors. Many hospital IT environments were never designed for this big data era. They face storage, processing, and network bottlenecks that impede AI model training and deployment. Without modernization, these infrastructure limitations form a hard technical ceiling on AI adoption.
Data Governance and Standards
Improving data quality and interoperability will require concerted data governance and standardization efforts across the healthcare enterprise. Currently, adoption of common data standards like HL7 FHIR (Fast Healthcare Interoperability Resources) and consistent clinical terminologies is incomplete. A lack of enterprise data governance means each department might manage and label data differently. This inconsistency forces each AI project to start with extensive data cleaning and mapping. Leading organizations are addressing this by establishing data governance committees, enterprise data warehouses or lakes, and standardized vocabularies for key data elements. Aligning around standards (for example, using the U.S. Core Data for Interoperability dataset as a baseline) can significantly streamline AI development. Breaking down silos through unified data platforms or integration middleware is equally important. As described in the case studies below, some healthcare leaders have managed to create centralized, interoperable data environments that enable AI at scale.
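A small illustration of what that standardization work looks like in practice: mapping department-specific local codes to a shared vocabulary (LOINC, in this hypothetical example) so downstream AI pipelines see one consistent representation, and measuring how much of an incoming feed conforms. The local codes and crosswalk below are invented for illustration.

```python
# Hypothetical crosswalk from local, department-specific lab codes to LOINC.
# In a governed environment this table would live in a terminology service,
# not in application code.
LOCAL_TO_LOINC = {
    "LAB_GLU_SER":  "2345-7",   # Glucose [Mass/volume] in Serum or Plasma
    "CHEM_GLUCOSE": "2345-7",
    "HGB_A1C":      "4548-4",   # Hemoglobin A1c/Hemoglobin.total in Blood
}

def to_loinc(local_code: str) -> str | None:
    """Return the governed LOINC code for a local code, or None if unmapped."""
    return LOCAL_TO_LOINC.get(local_code)

records = [
    {"patient_id": "A1", "local_code": "CHEM_GLUCOSE", "value": 105},
    {"patient_id": "B7", "local_code": "LAB_GLU_SER", "value": 98},
    {"patient_id": "C3", "local_code": "POC_GLUCOSE", "value": 140},  # unmapped
]

unmapped = []
for rec in records:
    rec["loinc"] = to_loinc(rec["local_code"])
    if rec["loinc"] is None:
        unmapped.append(rec["local_code"])

# A simple governance metric: share of the incoming feed that conforms to the standard.
coverage = 1 - len(unmapped) / len(records)
print(f"Mapping coverage: {coverage:.0%}; unmapped codes: {unmapped}")
```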
Privacy, Security, and Regulatory Constraints
Healthcare data is among the most sensitive information, so any AI initiative must navigate strict privacy regulations and compliance requirements. Laws like the Health Insurance Portability and Accountability Act (HIPAA) govern how patient data can be used and shared, with severe penalties for violations. This legal environment can stifle data sharing between organizations and even within organizations if not carefully managed. Hospitals may be hesitant to share data with third-party AI developers or cloud platforms due to fear of breaches or non-compliance. De-identifying patient data for AI training is possible, but it adds complexity and must be done thoroughly to prevent any re-identification risk. Additionally, some datasets (such as mental health or substance abuse treatment records under 42 CFR Part 2) have extra protections, limiting their availability for building AI models. Regulatory compliance thus often slows down or constrains AI projects until proper data use agreements, business associate contracts, and security measures are in place.
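The sketch below illustrates one simplified piece of de-identification: dropping or transforming direct identifiers and generalizing dates before data are released for model training. It is a toy example in the spirit of HIPAA’s Safe Harbor method, not a complete or certified de-identification pipeline (which would also address free text, rare values, and expert-determination requirements).

```python
import hashlib
from datetime import date

record = {
    "name": "Jane Doe",
    "mrn": "00123456",
    "zip": "30301",
    "birth_date": date(1948, 5, 17),
    "diagnosis_code": "E11.9",
}

def pseudonymize(identifier: str, salt: str = "store-and-rotate-securely") -> str:
    """One-way hash so records can be linked without exposing the raw MRN."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:16]

def deidentify(rec: dict) -> dict:
    """Toy Safe Harbor-style transform: drop names, hash the MRN, truncate ZIP, keep birth year only."""
    return {
        "pseudo_id": pseudonymize(rec["mrn"]),
        "zip3": rec["zip"][:3],                   # generalize geography
        "birth_year": rec["birth_date"].year,     # generalize dates to year
        "diagnosis_code": rec["diagnosis_code"],  # clinical content retained
    }

print(deidentify(record))
```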
Cybersecurity Defense
Cybersecurity challenges represent another major technical barrier closely tied to data concerns. Healthcare has become a prime target for cyberattacks, and large-scale data breaches are unfortunately common. Any new system that stores or transmits health data—such as an AI analytics platform—expands the threat surface unless robustly secured. The industry has seen a worrying trend of breach incidents, which can erode trust in digital solutions. In 2023 alone, 548 healthcare data breaches were reported to the U.S. Department of Health & Human Services, compromising nearly 122 million individuals’ records – a record high, and almost quadruple the number of affected people in 2022 (Deo, 2024).
This sobering reality means that AI systems must meet very high security standards. Health organizations need to invest in encryption, access controls, network security, and continuous monitoring to protect data used in AI, whether on-premise or in the cloud. Moreover, they must anticipate novel risks like adversarial attacks on AI models (where malicious inputs could trick an algorithm) or model inversion attacks that could potentially expose training data. The HHS AI Strategic Plan explicitly flags data breaches and biosecurity as concerns, calling for guidelines to safeguard AI models and health data from such threats (Kagan et al., 2025). Clearly, without strong privacy and security provisions, both patients and providers will remain wary of AI, slowing its adoption.
Another facet of compliance involves regulatory oversight of AI algorithms themselves, particularly those used for clinical decision-making. The FDA is increasingly evaluating AI/ML-based medical software for safety and effectiveness. Healthcare AI tools intended for diagnosis or treatment (for example, an AI that reads radiology images) may require FDA clearance or approval. Ensuring an AI model meets regulatory requirements can be a technical challenge: it must be trained on high-quality data, demonstrate sufficient accuracy and consistency, and include thorough documentation of its behavior. Additionally, regulators (and hospital review boards) are pushing for algorithms to avoid biases against protected groups and to be transparent about their functioning (linking to the explainability issue). In sum, the regulatory environment demands that AI solutions not only be innovative, but also responsible and compliant by design. This necessity can slow down development but ultimately leads to safer, more trustworthy tools.
Transparency and the “Black Box” Problem
Even when data and compliance issues are addressed, lack of transparency and explainability in AI models can hinder adoption. Many powerful healthcare AI systems—such as deep learning neural networks—operate as “black boxes,” meaning their internal decision logic is not easily interpretable by humans. Clinicians, who are trained in evidence-based practice, naturally hesitate to trust an algorithm’s output without understanding the rationale. For instance, one study reported that clinicians were uncomfortable using an AI-driven readmissions prediction tool because "they could not determine which patient factors were driving the model’s predictions," undermining their trust in the results (Hassan et al., 2024).
This aligns with broader trends: in a recent survey, "86% of Americans said their biggest concern with healthcare AI is the lack of transparency about how information is sourced or validated" (Rebelo, 2023). Physicians share this concern—"89% of doctors say they need AI tool vendors to clearly explain where the data comes from and how the algorithm works" before they would feel comfortable relying on it (Team, 2024). In critical domains like healthcare, an AI can’t be a mysterious “black box.” Practitioners need confidence that the AI’s advice makes sense and aligns with clinical reasoning.
Transparency is not only a technical issue but also a cultural one. Trust is essential for AI adoption: doctors, nurses, and patients all must trust that the tool is accurate and fair. Each stakeholder might ask, “How did the AI get to this conclusion?” and “Can I rely on it for this decision?” If those questions can’t be answered, the AI output may simply be ignored or overridden. In practice, this has led to under-utilization of some implemented AI systems. To overcome this, developers are increasingly focusing on explainable AI (XAI) techniques—methods to make AI decisions more interpretable. For example, an AI that flags a patient at high risk for sepsis might also highlight the specific vital signs and lab results most responsible for that risk score, giving clinicians insight into the “why” behind the alert. HHS and the National Academy of Medicine have emphasized the importance of such transparency; one report notes that every healthcare AI tool should be accompanied by information on its logic, limits, and potential biases as a “priority” for safe use (ONC, 2024). While techniques to achieve explainability are still evolving, the goal is clear: AI must augment clinical judgment, not mystify it. Solutions that offer clear, user-friendly explanations and can be validated against clinical knowledge will find far warmer reception in the healthcare community.
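As a minimal illustration of the kind of per-prediction explanation clinicians ask for, the sketch below trains a simple logistic regression sepsis-risk model on synthetic data and reports which inputs pushed one patient’s score up or down (coefficient times the standardized feature value). Real deployments typically use richer methods such as SHAP, but the idea—pairing every alert with its driving factors—is the same. All data and feature names here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["heart_rate", "resp_rate", "wbc_count", "lactate", "temp_c"]

# Synthetic training data: 1,000 encounters, risk loosely driven by lactate and resp rate.
X = rng.normal(size=(1000, len(features)))
y = (0.9 * X[:, 3] + 0.6 * X[:, 1] + rng.normal(scale=0.8, size=1000)) > 1.0

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Explain one prediction: contribution of each standardized feature to the log-odds.
x_new = scaler.transform(rng.normal(size=(1, len(features))))
contributions = model.coef_[0] * x_new[0]
risk = model.predict_proba(x_new)[0, 1]

print(f"Predicted sepsis risk: {risk:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:<11} {'raises' if c > 0 else 'lowers'} risk (log-odds {c:+.2f})")
```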
Integration into Clinical Workflow and Operations
Even a well-designed, transparent AI tool can fall flat if it doesn’t integrate smoothly into clinical workflows. Healthcare environments are fast-paced and complex; providers use a multitude of software systems and follow established protocols for patient care. Introducing an AI solution often means asking clinicians to incorporate a new step or consult a new interface during their routine. If this process is cumbersome—say, logging into a separate AI application to view risk predictions—busy healthcare professionals are unlikely to adopt it consistently. Workflow integration has thus become a make-or-break factor for AI projects. In fact, variability in workflows across different units was identified as a barrier in implementing an AI model for hospital readmissions; the tool succeeded in some departments but faltered in others that hadn’t aligned their processes, illustrating how lack of integration and standardized workflow can hinder AI success (Hassan et al., 2024). The lesson is that AI can’t exist in a vacuum—it must embed into existing clinical systems (like the EHR) and fit naturally into the user’s routine.
Key workflow considerations include user interface and alert design. If an AI system provides an alert or recommendation, it must be delivered in a manner that clinicians find helpful, not disruptive. Many early AI or clinical decision support tools learned this the hard way: firing off too many alerts leads to “alarm fatigue,” causing users to ignore or disable them. Successful integration often involves refining the sensitivity of AI alerts to minimize false alarms and ensure that when the AI speaks up, it truly adds value. Timing is also critical—an AI-driven insight is most useful at the point of care (for example, an early warning about patient deterioration should come while there’s still time to intervene, not after the fact). Integrating AI outputs into the same interface where clinicians already look (their EHR dashboard or rounding report) greatly increases the likelihood of adoption. There are encouraging examples of integration: some hospitals have added AI-generated risk scores directly into EHR patient lists, so a doctor sees a highlighted “sepsis risk” icon next to certain names during routine chart review, prompting early action. When AI tools align with clinicians’ workflow and usability expectations, they augment rather than obstruct the care process.
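One concrete lever for avoiding alarm fatigue is choosing the alert threshold deliberately rather than defaulting to 0.5. The hedged sketch below sweeps thresholds on a validation set and reports, for each, how many alerts would fire per 1,000 patients and what sensitivity that buys—the kind of trade-off clinicians and informatics teams can review together before go-live. The scores and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(1)

# Synthetic validation set: true deterioration labels and model risk scores.
y_true = rng.binomial(1, 0.08, size=5000)                       # ~8% event rate
y_score = np.clip(0.55 * y_true + rng.beta(2, 6, size=5000), 0, 1)

print(f"{'threshold':>9} {'alerts/1000':>12} {'sensitivity':>12} {'PPV':>6}")
for threshold in (0.3, 0.4, 0.5, 0.6, 0.7):
    alerts = (y_score >= threshold).astype(int)
    print(f"{threshold:>9.1f} "
          f"{1000 * alerts.mean():>12.0f} "
          f"{recall_score(y_true, alerts):>12.2f} "
          f"{precision_score(y_true, alerts, zero_division=0):>6.2f}")
```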
Operational Integration
Beyond clinical workflows, operational integration matters too. AI solutions should mesh with healthcare organizations’ IT operations and data pipelines. This often requires new infrastructure for data streaming and real-time analytics. Some hospitals have built centralized AI platforms that connect to all departments, ensuring models can be deployed and monitored in a uniform way. Others leverage cloud-based services to offload intensive computation, while still embedding results back into local systems for end-users. Importantly, integrating AI is not a one-time task but an ongoing process—as workflows evolve and as new data systems come online (or legacy systems are retired), the AI integration needs maintenance and adjustment. This points to the need for interdisciplinary teams (IT, clinicians, data scientists, workflow engineers) and Centers of Excellence (COEs) working together to implement and sustain AI in practice. It’s a challenge of organizational change management as much as technology. Healthcare organizations that invest in training staff on new AI tools, soliciting feedback, and iteratively improving the workflow fit will see much better uptake. In summary, making AI a frictionless part of the daily routine—nearly invisible except for the improved outcomes it produces—is a crucial technical and human-factors goal.
Access to Data for Training and Fine-Tuning AI
Another significant barrier is access to sufficiently rich data for developing and fine-tuning AI models. Modern AI, especially deep learning, typically requires vast amounts of example data to achieve high performance. While large health systems collectively generate plenty of data, individual organizations may have only a slice of what’s needed for a robust AI model—particularly for less common conditions or diverse patient populations. AI models trained on one hospital’s data might not generalize well to another’s patients due to demographic and practice differences. Ideally, training data should be aggregated from multiple sources to improve diversity and volume. However, as discussed, data sharing between institutions is fraught with privacy and interoperability issues. Many AI developers (including startups and academic researchers) struggle to obtain datasets that are both substantial and legally accessible. Fine-tuning an existing AI model on a local organization’s data is often recommended to adapt it to that setting, but obtaining even internal data for such purposes can require lengthy approvals and technical work to extract and prepare it.
There are emerging strategies to tackle data access challenges. One approach is federated learning, which allows AI models to be trained across multiple institutions’ data without the data ever leaving each institution. In this approach, a common model is sent to each site, locally trained on that site’s patient data, and only the learned parameters (not raw data) are sent back and aggregated to form a more robust global model. This technique can mitigate privacy concerns and overcome silos, albeit with technical complexity. Another strategy is creating centralized data collaboratives or exchanges where organizations contribute de-identified patient data to a pooled resource for AI development. For example, the National Institutes of Health and other agencies have funded large research databases and challenges to provide AI researchers with access to diverse, representative health datasets. Synthetic data generation is also being explored—using AI to create artificial patient records that statistically mirror real data without exposing actual patient information. While not a perfect substitute, synthetic data can assist with model training and testing when real data is scarce or sensitive.
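To make the federated idea tangible, here is a minimal NumPy sketch of federated averaging for a logistic-style model: each simulated site computes a local update on its own data, and only weight vectors—never patient records—are sent back and averaged. Production systems add secure aggregation, differential privacy, and much more; this is purely a conceptual illustration on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_features = 5

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Train locally with plain gradient descent on logistic loss; return new weights only."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

# Three simulated hospitals with slightly different data distributions (synthetic).
true_w = rng.normal(size=n_features)
sites = []
for shift in (0.0, 0.5, -0.5):
    X = rng.normal(loc=shift, size=(400, n_features))
    y = (X @ true_w + rng.normal(scale=0.5, size=400) > 0).astype(float)
    sites.append((X, y))

# Federated averaging: broadcast global weights, train locally, average the results.
global_w = np.zeros(n_features)
for communication_round in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)

print("Learned global weights:", np.round(global_w, 2))
```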
Healthcare leaders are starting to recognize that data sharing is key to unlocking AI’s value. The latest HHS AI strategy emphasizes “democratizing AI technologies and resources” by promoting multi-institutional data partnerships and interoperability standards to improve data access for AI innovation (Kagan et al., 2025). In practice, some pioneering health systems have partnered with technology companies to build secure data hubs that enable controlled sharing and AI model development (see the Mayo Clinic case study below). These solutions show that it is possible to respect privacy while still leveraging the combined power of data across organizations. By investing in such collaborative infrastructures and frameworks, the healthcare industry can ensure that AI models are trained on broad, high-quality data—leading to more accurate and equitable tools. Until these practices are widespread, however, limited data access will continue to be a hurdle, especially for smaller providers and early-stage innovators in AI.
Continuous Validation and Performance Monitoring
Finally, assuming an AI model is successfully built and integrated, healthcare organizations face the ongoing challenge of validation and performance monitoring. An algorithm’s accuracy and safety are not static qualities; they must be continuously assessed in the real world. Changes in patient populations, clinical protocols, or data collection processes can all impact an AI model’s performance over time—a phenomenon known as model drift. Without periodic re-validation, an AI that initially passed testing could become less reliable or even unsafe. For instance, an AI model for detecting a certain condition might perform well during its initial trial, but if the hospital later changes how it records a key vital sign (new devices or different charting methods), the model’s input distribution might shift, degrading its output accuracy. Routine monitoring can catch these issues early. Unfortunately, many healthcare organizations do not yet have established procedures for AI performance surveillance post-deployment. In a recent review of AI implementation barriers, researchers noted a “lack of high-quality evidence and data to support AI tools” in clinical settings, indicating that rigorous validation is often insufficient (Ahmed et al., 2023). Hospitals are rightly cautious; they want to see proof that an AI actually improves outcomes in their environment, not just in academic studies or vendor demonstrations.
Setting up a strong validation framework involves both technical and operational components. Technically, it means measuring the AI’s key metrics (accuracy, false positive/negative rates, decision response times, etc.) on an ongoing basis, and ideally, comparing patient outcomes with and without the AI’s guidance. It may involve running the AI in a shadow mode initially (where it makes predictions in the background, not affecting care, until confidence in its performance is gained). Performance monitoring dashboards can track how the AI is doing and flag anomalies—such as a sudden increase in error rates—which could indicate an underlying data or system change. From an operational standpoint, organizations should designate responsibility for AI oversight, much like quality assurance for any medical device or procedure. This might be a new role for clinical AI officers or a committee that includes clinicians, data scientists, and IT leaders. Additionally, there should be protocols for updating or retraining models when needed. Some AI systems might even be designed to learn continuously, but this requires guardrails to ensure any new learning is carefully evaluated (the FDA is exploring regulatory paradigms for such adaptive algorithms).
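A lightweight version of this monitoring can be as simple as recomputing a model’s discrimination metric over rolling time windows and flagging when it falls below an agreed floor. The sketch below does this with scikit-learn’s AUC on synthetic monthly batches; a real program would add input-distribution checks, calibration, subgroup breakdowns, and alert routing. The simulated “charting change” and the 0.75 floor are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
AUC_FLOOR = 0.75  # agreed minimum performance before review is triggered

def monthly_batch(signal: float):
    """Synthetic month of outcomes and risk scores; a lower 'signal' mimics drift."""
    y = rng.binomial(1, 0.1, size=2000)
    scores = signal * y + rng.normal(scale=0.25, size=2000)
    return y, scores

# Simulate a year in which input quality degrades after month 8 (e.g., a charting change).
for month in range(1, 13):
    signal = 0.6 if month <= 8 else 0.15
    y_true, y_score = monthly_batch(signal)
    auc = roc_auc_score(y_true, y_score)
    status = "OK" if auc >= AUC_FLOOR else "REVIEW: possible model drift"
    print(f"Month {month:>2}: AUC = {auc:.3f}  {status}")
```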
Validation also ties back into transparency and trust: when clinicians see that an AI system’s performance is being diligently tracked and that it consistently proves its worth (or is promptly fixed when it falters), their trust in the tool solidifies. Conversely, if an AI is deployed and then “left alone,” staff may grow uneasy or lose confidence, especially at the first sign of a mistake. Therefore, making ongoing validation part of the lifecycle of AI in healthcare is essential for sustained adoption. It’s not a one-and-done project, but a continuous quality improvement process. Ensuring that AI predictions remain accurate, fair, and clinically relevant over time will maximize their positive impact on patient care. This continuous oversight completes the chain of overcoming data barriers: from ensuring good data goes in, to verifying that good information (and outcomes) come out.
The following case studies highlight how two healthcare organizations have tackled these data-related barriers to successfully implement AI solutions.
Case Studies
Mayo Clinic, a large U.S. health system, recognized that leveraging AI at scale required overcoming data silos and strict privacy requirements. In 2019, Mayo entered a 10-year partnership with Google to create a novel cloud-based data infrastructure. Central to this is the Mayo Clinic Platform, a secure data enclave that houses de-identified subsets of Mayo’s clinical data. External researchers and AI developers can bring algorithms into the enclave to train and test them on Mayo’s data, but the patient data itself never leaves the secure cloud environment (Greene et al., 2022).
This “data under glass” approach is a form of federated learning that addresses multiple barriers; it harmonizes data in one cloud platform, breaks down internal silos, and maintains stringent privacy and security by keeping protected health information on Mayo’s controlled cloud servers. The platform also supports interoperability standards, allowing collaborators to supplement Mayo’s data with their own. By building this ecosystem, Mayo Clinic has enabled AI development on a scale previously impractical under HIPAA constraints. Early outcomes include more rapid development of AI models for cardiovascular and radiation therapy applications, drawing on diverse data without compromising patient confidentiality. Mayo’s case illustrates how an innovative data governance and infrastructure solution can unlock AI potential—through partnerships and technology—while overcoming silos, ensuring compliance, cybersecurity, and fostering collaboration.
HCA Healthcare, one of the nation’s largest hospital networks, exemplifies how unified data and integration into workflow can drive AI success. HCA developed an algorithm called Sepsis Prediction and Optimization of Therapy (SPOT) to detect early signs of sepsis, a life-threatening infection, across its 180+ hospitals. SPOT was trained on an enormous dataset — HCA’s centralized data warehouse encompassing 31 million patient encounters annually — which gave it exceptional sensitivity and accuracy.
The system continuously monitors live data from EHRs, including vitals, lab results, and nursing notes, for every inpatient in real time. When SPOT’s AI model predicts a patient is at risk of sepsis, it triggers an alert to the care team within the existing workflow (integrated into the EHR and nurses’ notification system). This seamless integration means clinicians are informed up to 18 hours earlier than they might detect sepsis on their own, without needing to check a separate system. The impact has been striking: over the initial deployment, HCA’s sepsis AI was credited with saving more than 5,500 lives by enabling timely treatment. Key to this success was HCA’s investment in a modern data platform that unified formerly siloed data streams and the strong governance to standardize data across its network. By pairing a massive, high-quality dataset with real-time interoperability and workflow integration, HCA overcame many technical barriers—showcasing how AI can reliably augment clinicians when the data foundation is solid. SPOT’s results have since spurred confidence in AI tools among HCA’s clinicians, and the organization is expanding similar AI-driven decision support to other areas of care (Slabodkin, 2018).
Conclusion
AI’s promise in healthcare is real, but so are the technical hurdles. As we have discussed, data quality, accessibility, and governance issues often stand in the way of effective AI adoption, along with challenges in interoperability, legacy infrastructure, privacy/security, and the need for transparency and integration into workflows. Overcoming these barriers is a complex endeavor that requires strategic planning, cross-disciplinary expertise, and often significant investments in technology and process redesign. Healthcare organizations do not have to navigate this journey alone. Engaging experienced partners—such as Mesh Digital LLC, a management consulting firm specializing in digital transformation—can accelerate the path to success.
Mesh Digital LLC can assist healthcare providers in developing a comprehensive data and AI strategy that addresses each of the barriers outlined above. This includes defining data, analytics, and AI strategies; building business cases; developing technical solutions; and steering organizational change management to ensure adoption. Key ways a consulting partner can help include:
- Data Quality Improvement & Harmonization: Auditing existing data for gaps or errors, implementing data cleaning processes, and establishing master data management practices. Consultants can help introduce standard terminologies and map data from disparate sources into a unified schema, ensuring AI models train on consistent, high-quality data.
- Integrating Data Silos & Enhancing Interoperability: Designing and deploying interoperability solutions such as health information exchanges, FHIR-based APIs, or centralized data lakes that break down silos. Firms like Mesh Digital can recommend the right architecture (cloud or hybrid) to connect EHRs, lab systems, and other databases, providing AI systems seamless access to the breadth of data they need.
- Legacy System Modernization: Evaluating legacy IT systems and creating roadmaps for modernization or integration. This might involve migrating data and analytics workloads to secure cloud platforms, implementing middleware to interface legacy systems with new AI tools, or consolidating redundant systems. Upgrading infrastructure improves scalability for big data and AI while reducing maintenance burdens.
- Data Governance and Compliance Frameworks: Establishing robust data governance structures that define data ownership, stewardship, and standard operating procedures for data use. Consultants can develop policies and training for HIPAA/HITECH compliance, patient consent management, and ethical AI use. This ensures that any AI initiative has a strong foundation in privacy, security, and regulatory compliance, mitigating risk from day one.
- Cybersecurity Enhancement: Conducting security risk assessments and strengthening defenses around health data and AI systems. This includes deploying advanced cybersecurity tools, setting up monitoring for unusual data access or algorithm behavior, and planning incident response specifically for scenarios involving AI (e.g., detecting bias or tampering). Building patient and stakeholder trust is far easier when strong security measures are visibly in place.
- AI Explainability and Transparency: Guiding the selection or development of AI models that can provide interpretable outputs. Consulting experts can implement dashboards or documentation that clearly communicate an AI’s decision factors to clinicians and executives. Mesh Digital LLC, for instance, emphasizes “explainable AI” in its projects, helping clients choose algorithms that align with clinicians’ need for understanding and developing user interfaces that present AI insights in a clear, evidence-backed manner.
- Workflow Integration and Organizational Change Management: Working closely with clinical teams to integrate AI tools into existing workflows with minimal disruption. This might involve customizing EHR interfaces to incorporate AI alerts, configuring the timing and recipients of AI notifications to fit clinical schedules, and running pilot simulations to fine-tune the process. Additionally, consultants provide change management support—engaging clinicians early, gathering feedback, and iterating on the solution. They can train staff on new tools and help cultivate a culture that is receptive to data-driven decision support, addressing fears and highlighting benefits for patient care.
- Ongoing Monitoring and Improvement: Setting up governance for continuous AI performance monitoring and maintenance. A consulting partner can help implement key performance indicators and an AI oversight committee within the organization. They might also assist in establishing a cadence for model re-validation, updates, and re-training as new data becomes available or as objectives change. This ensures the AI solutions remain effective and aligned with clinical goals over the long term.
The call to action for healthcare executives and technology leaders is clear: invest in your data foundations and expert partnerships now to unlock AI’s transformative power. Technical barriers—though daunting—can be systematically addressed with the right strategy and support. Mesh Digital LLC stands ready to be a catalyst on this journey. By conducting thorough assessments and crafting tailored roadmaps, we help healthcare organizations modernize their data ecosystems, navigate compliance, and implement AI solutions that are both innovative and practical. The result is a healthcare system that can fully leverage data as a strategic asset: improving patient outcomes, operational efficiency, and care experiences through the intelligent use of AI.
The time to act is now: by strengthening data foundations today, healthcare providers will be poised to harness AI tomorrow, driving better health outcomes and greater value for all stakeholders in the U.S. healthcare system.
References
- Ahmed, M. I., Spooner, B., Isherwood, J., Lane, M., Orrock, E., & Dennison, A. (2023). A systematic review of the barriers to the implementation of artificial intelligence in Healthcare. Cureus. https://doi.org/10.7759/cureus.46454
- Deo, S. (2024, February 20). A look at 2023 data breaches reported to the HHS OCR. 24by7security. https://blog.24by7security.com/a-look-at-2023-data-breaches-reported-to-the-hhs-ocr
- Eastburn, J., Fowkes, J., & Kellner, K. (2024, June 7). Digital transformation: Health systems’ investment priorities. McKinsey & Company. https://www.mckinsey.com/industries/healthcare/our-insights/digital-transformation-health-systems-investment-priorities
- Gabriel, M. H., Richwine, C., Strawley, C., Barke, W., & Everson, J. (2024, May). Interoperable exchange of patient health information among U.S. hospitals: 2023. HealthIT.gov. https://www.healthit.gov/data/data-briefs/interoperable-exchange-patient-health-information-among-us-hospitals-2023
- Greene, S. M., Ahmed, M., Chua, P. S., & Grossmann, C. (Eds.). (2022). Case study: Mayo-Google partnership. In Sharing health data: The why, the will, and the way forward. https://www.ncbi.nlm.nih.gov/books/NBK594445/
- Hassan, M., Kushniruk, A., & Borycki, E. (2024, August 29). Barriers to and facilitators of artificial intelligence adoption in health care: Scoping review. JMIR Human Factors. https://pmc.ncbi.nlm.nih.gov/articles/PMC11393514/
- Kagan, D., Yoon, S., Tobey, D., Carr, A., & Kung, K. (2025, January 17). HHS releases AI Strategic Plan: Key takeaways for businesses. DLA Piper. https://www.dlapiper.com/en/insights/publications/2025/01/hhs-releases-ai-strategic-plan
- Kong, H.-J. (2019). Managing unstructured big data in healthcare system. Healthcare Informatics Research, 25(1), 1–2. https://doi.org/10.4258/hir.2019.25.1.1
- Office of the National Coordinator for Health Information Technology (ONC). (2024). Special Emphasis Notice (SEN): Interest in projects to develop innovative ways to evaluate and improve the quality of healthcare data used by artificial intelligence (AI) tools in healthcare and accelerate adoption of health information technology in behavioral health (Notice Number NAP-AX-22-001). U.S. Department of Health and Human Services. https://www.healthit.gov/sites/default/files/page/2024-05/LEAP%20FY2024%20SEN_508.pdf
- Slabodkin, G. (2018, November 19). HCA saves more than 5,500 lives with sepsis monitoring algorithms. Health Data Management. https://www.healthdatamanagement.com/articles/hca-saves-more-than-5500-lives-with-sepsis-monitoring-algorithms?id=1502
- Team, K. (2024, December 9). AI in healthcare statistics: 62 findings from 18 research reports. Keragon. https://www.keragon.com/blog/ai-in-healthcare-statistics
- U.S. Department of Health & Human Services (HHS). (2023). HHS Artificial Intelligence (AI) Strategic Plan. Washington, DC: HHS Office of the Chief Information Officer. Retrieved from HealthIT.gov
- Wolters Kluwer Health. (2023, July 11). Generative AI in healthcare: Gaining consumer trust (survey report). Retrieved from the Wolters Kluwer newsroom, wolterskluwer.com