Karen Ostrowski

The Limits of Consumer-Mediated Access

As digital health systems scale and automate, trust cannot rest solely on individual choice. This post explores the limits of consumer-mediated access and why institutional responsibility still matters.

Why patient choice can’t carry the full burden of trust in digital health


In moments of trust failure, the digital health ecosystem often reaches for a familiar explanation: we didn’t give patients enough control.

It’s an intuitively appealing idea. If individuals could decide who accesses their data, many of today’s privacy and trust concerns would dissolve. Put patients in charge, and trust will follow.

This framing is gaining traction precisely because it feels empowering and values-aligned. But as health data systems grow more complex, more automated, and more outcome-shaping, consumer-mediated access is increasingly being asked to carry more weight than it can reasonably bear. What’s emerging is not just an emphasis on patient agency, but a quiet substitution: consumer control standing in for institutional trust. The result is a narrative that emphasizes individual choice while leaving institutional responsibility largely untouched.

The appeal of consumer-mediated access is easy to understand. It’s shorthand for a model where trust is mediated through individual permission: patients decide who gets access to their data, rather than institutions making those decisions on their behalf. In effect, this approach treats consent as the primary mechanism for trust, rather than one component of a broader governance framework.

High-profile data controversies, interoperability disputes, and growing unease about AI have created demand for simple, reassuring answers. “Putting patients in control” offers one. It aligns with consumer-centric language, API-based access models, and modern product narratives that emphasize choice and empowerment.

It also conveniently sidesteps harder questions.

Framing trust as a function of individual consent allows institutions to point downstream toward user decisions rather than upstream at governance, accountability, and system design. Responsibility shifts without ever being fully resolved. In that sense, consumer control functions as a comfort story: it acknowledges concern without requiring structural restraint.

None of this makes consumer-mediated access wrong. But it does explain why it’s being asked to solve problems it was never designed to address.

Where Consumer-Mediated Access Falls Short

Control over access is not the same as control over use, inference, or impact. Authorizing a data exchange does not determine how information is combined, analyzed, or acted upon once it moves beyond the point of access and into broader systems of reuse and decision-making.

That mismatch is not a failure of patient engagement; it reflects the structural limits of consent when it is asked to stand in for governance.

In practice, patients are being asked to make informed choices in environments that are:

  • technically complex and difficult to interpret

  • opaque in how data is combined, reused, or enriched

  • increasingly automated, with decisions happening far beyond the original transaction

Even well-designed consent flows struggle to account for what happens after authorization. Information asymmetry is real. Data literacy is uneven. Consent fatigue is well documented. And when harm occurs, enforcement mechanisms are often slow, fragmented, or unavailable to individuals altogether.

The result is a model that places cognitive and moral burden on people without giving them meaningful leverage over outcomes.

Those limits become even more pronounced as data moves through interoperable, AI-enabled systems. Interoperability accelerates flow, but AI transforms data into predictions, classifications, and risk scores that persist well beyond the moment of access. Once generated, those inferences may be reused, propagated, or acted upon in ways that are difficult, if not impossible, for individuals to trace or reverse.

Consumer-mediated access models were designed to manage discrete access decisions, not the consequences that follow as data is reused, combined, and acted upon: downstream inference, secondary use, cumulative harm. Yet they are increasingly treated as sufficient guardrails in systems where consequences are distributed, durable, and hard to attribute.

In that context, consent alone cannot do the work we’re assigning to it. Consent signals permission, but it does not establish limits on downstream use, govern how inferences are generated or applied, or provide mechanisms for accountability when harm occurs, especially in systems that operate continuously and at scale.
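
To make that structural limit concrete, here is a minimal sketch; every name in it is hypothetical, and it stands in for any consent-gated data API rather than a specific product. Notice where the code’s authority ends: it can decide whether data crosses a boundary, but it has no vocabulary for what happens on the other side.

```python
# Hypothetical sketch of a consent check at the point of access,
# in the style of an OAuth-like scope grant. All names are illustrative.

from dataclasses import dataclass


@dataclass(frozen=True)
class AccessRequest:
    patient_id: str
    requester_id: str
    scope: str  # e.g., "read:medications"


class ConsentStore:
    """Tracks which requesters a patient has authorized, per scope."""

    def __init__(self) -> None:
        self._grants: set[tuple[str, str, str]] = set()

    def grant(self, patient_id: str, requester_id: str, scope: str) -> None:
        self._grants.add((patient_id, requester_id, scope))

    def is_authorized(self, req: AccessRequest) -> bool:
        return (req.patient_id, req.requester_id, req.scope) in self._grants


def fetch_records(store: ConsentStore, req: AccessRequest) -> str:
    if not store.is_authorized(req):
        raise PermissionError("patient has not authorized this access")
    # Consent's authority ends here. Nothing in this code constrains how
    # the requester combines, analyzes, infers from, or re-shares the
    # data once it is returned -- the governance gap described above.
    return f"records for {req.patient_id} within scope {req.scope}"
```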

Consumer-mediated access still plays an important role in modern digital health systems. It supports transparency, enables agency, and creates pathways for engagement that did not previously exist. These are meaningful advances.

But consumer control cannot bear the full weight of trust on its own.

When access decisions are treated as the primary safeguard, responsibility shifts downstream. Trust becomes something individuals are expected to manage through better choices, rather than something institutions earn through restraint, clarity, and accountability. In practice, this obscures the need for upstream governance, particularly in systems where individuals cannot realistically see, contest, or influence how their data is used once it moves beyond the point of access.

Durable trust in digital health is built through systems that take responsibility for data use and AI deployment, especially where automated decision-making and secondary use are involved. That requires:

  • clear institutional accountability for how data is used and reused

  • governance frameworks that define not just who can access data, but which uses are permissible

  • enforcement mechanisms that operate at system speed, not only after harm occurs

  • metrics that measure real-world outcomes, not just procedural compliance

These are harder problems than designing consent flows. They require constraint as much as innovation, and a willingness to enforce it. But they are the conditions under which trust becomes durable rather than aspirational.

Trust cannot be outsourced to individuals navigating systems they did not design and cannot meaningfully constrain. As systems scale and automate, the question is no longer whether patients can choose wisely, but whether institutions are prepared to govern responsibly.

Consumer-mediated access remains an important tool, but it cannot carry the full burden of trust in complex, automated systems.

Karen Ostrowski

The Work Between Policy and Technology

Cross-sector systems often break down not because of technology or policy alone, but because the work of translating between the two goes unnamed.

Why policy–technology alignment determines whether systems actually work


Back in 2018, I was tasked with conducting something called “policy-technology alignment” as part of California’s Alameda County Whole Person Care pilot. At the time, I didn’t fully understand what that meant. I recently found a note in an old notebook that literally read, “what does this mean?”, written sincerely, not rhetorically.

I understood health care policy and the data sharing landscape reasonably well. I did not yet understand technology in the same way. And at that point, most health information exchange was still centered on clinical, HIPAA-regulated data. California’s Whole Person Care pilots (the precursor to CalAIM) were pushing into new territory: cross-sector care coordination that brought together health care providers, housing organizations, behavioral health and substance use treatment providers, justice-involved systems, community-based organizations, and others that had not historically shared data or infrastructure.

The use case was clear. Counties were being asked to coordinate care for Medi-Cal beneficiaries with complex needs by breaking down silos across systems. What was far less clear was how to design technology and workflows that could support that coordination without violating the law, undermining trust, or defaulting to overly restrictive approaches that defeated the program’s purpose.

That ambiguity is exactly where policy–technology alignment shows up in practice.

At a basic level, policy–technology alignment is the work of translating policy requirements and intent into technical and operational decisions—how systems are designed, configured, and used in practice. It involves understanding not just what the law says, but how it is meant to function in real-world programs, and then ensuring that system design reflects those realities. This work is distinct from compliance alone. Compliance asks whether something is permitted. Policy–technology alignment asks how permission, restriction, consent, and accountability are actually implemented in systems that people have to use.

In the Whole Person Care context, this translation problem surfaced immediately. Data from different sectors came with different legal regimes, different expectations around consent, and varying institutional norms. HIPAA was only one piece of the puzzle. Housing data, social services data, behavioral health information, and justice-related data did not fit neatly into a single framework, yet the technology was expected to bring them together into shared care plans, coordinated workflows, and common operating pictures.

The core question I kept encountering was how to integrate these disparate data types into a shared environment without running afoul of the law or eroding trust among partners.

Practically, that often meant grappling with how to move from a strictly HIPAA environment to a hybrid one. It required careful interpretation of overlapping laws, a clear understanding of where flexibility existed, and deliberate choices about how consent, access controls, and data segmentation would be handled in the system over time, not just at go-live.

Policy–technology alignment, in that context, involved several interrelated activities. It meant interpreting policy intent rather than relying solely on literal readings. It meant distinguishing hard constraints from areas where programs had discretion. It meant translating legal and policy concepts into system logic: permissions, workflows, data flows, and user roles. And it meant anticipating how implementation choices would shape behavior over time, especially as systems scaled or partnerships evolved.
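
To illustrate what “translating legal and policy concepts into system logic” can look like, here is a deliberately simplified sketch. The roles, data categories, and rules are hypothetical stand-ins, not an actual Whole Person Care configuration; the point is that a default-deny table like this is where policy interpretation becomes executable.

```python
# Illustrative only: one way a cross-sector sharing policy might be
# encoded as system logic. Roles, categories, and purposes are
# hypothetical examples.

# (user_role, data_category) -> purposes the policy permits
POLICY_RULES: dict[tuple[str, str], set[str]] = {
    ("care_coordinator", "housing"): {"care_coordination"},
    ("care_coordinator", "clinical"): {"care_coordination", "treatment"},
    ("housing_navigator", "housing"): {"care_coordination"},
    # 42 CFR Part 2 data is deliberately absent: it would route through
    # a separate, explicit-consent pathway rather than a role rule.
}


def may_view(role: str, category: str, purpose: str) -> bool:
    """Default-deny: any combination the policy does not name is blocked.

    The absence of an entry is itself a policy decision -- which is how
    technical defaults quietly become policy in practice.
    """
    return purpose in POLICY_RULES.get((role, category), set())


assert may_view("care_coordinator", "housing", "care_coordination")
assert not may_view("housing_navigator", "clinical", "care_coordination")
```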

As this work expanded to other jurisdictions—Marin and Santa Cruz counties and the City of Sacramento—it became clear that these challenges were not unique to a single pilot or county. The same dynamics repeated themselves. New partners entered the ecosystem with varying levels of experience handling sensitive data. Legal counsel, often deeply familiar with HIPAA but less comfortable with broader cross-sector data sharing, approached system design from a place of understandable risk aversion. Program leaders, meanwhile, were focused on achieving outcomes that depended on timely, coordinated access to information across organizations.

Technology sat in the middle of these competing pressures. Systems were expected to operationalize highly nuanced agreements, layered consent models, and complex regulatory interpretations. When that nuance was flattened or misunderstood, the technology made decisions by default. Those defaults—who could see what, under what circumstances, and for what purposes—effectively became policy in practice.

Looking back, I didn’t set out to build a practice around policy–technology alignment. I kept encountering the same problems, regardless of geography or organizational structure. The work lived in a translation layer that was essential but rarely named, and often poorly understood. Sometimes it showed up in policy or legal teams. Sometimes in product or implementation. In technology companies, it even surfaced in sales contexts, where teams were asked to explain how a product fit within complex and evolving policy environments. In some of those settings, the work was described as “policy–product fit.” The terminology varied, but the underlying challenge remained consistent.

Over time, it became clear how often this work is treated as incidental rather than foundational. When policy–technology alignment is not done deliberately, systems tend to be technically compliant but operationally fragile. Tools struggle to gain adoption because they do not reflect how programs actually function. Data sharing efforts stall because consent, governance, and trust were never fully translated into system design.

As health and social care systems become more interconnected, the consequences of these failures become more pronounced. Cross-sector data sharing, secondary data use, and emerging technologies increase the stakes of design decisions that once seemed minor. Policy–technology alignment is no longer a niche concern. It is a core requirement for building systems that are lawful, usable, and worthy of trust.

The work has existed for some time. The question is whether we are prepared to recognize it as a distinct discipline—one that sits between policy, technology, and practice—and approach it with the rigor it demands. As policy expectations continue to outpace technical clarity, this translation work will only become more consequential.

Karen Ostrowski

Health Care in 2025: A System at a Crossroads

A system-level reflection on the forces shaping U.S. health care in 2025 — including economic pressure, safety-net shifts, and evolving questions of trust — and what they may mean heading into 2026.

2025 has not been defined by incremental policy change. It has been a year that continues to reveal how fragile the U.S. health care system becomes when economic pressure, institutional restructuring, and public mistrust converge. While individual policies, budgets, and legislative actions matter, the deeper story of the year is structural: a system under strain, operating with less financial margin, less institutional stability, and less public confidence than it has had in decades.

Across federal agencies, states, and communities, there is broad recognition that many parts of the health system are overly complex and burdened by layers of policy and bureaucracy built up over time. That recognition is not new. What has made 2025 distinct is how rapidly the system has been asked to absorb change — often without clear transition pathways — while economic and fiscal pressures limit the capacity to adapt.

This review looks beyond rulemaking and programmatic detail to examine what 2025 has revealed — and continues to reveal — about the health care system itself: its dependencies, its vulnerabilities, and the growing gap between policy ambition and lived experience.

Structural Shift: A Changing Federal Footprint

Through major legislation, budget decisions, and administrative restructuring, 2025 has brought significant changes to long-standing federal functions. Shifts in policy priorities are expected with changes in political leadership, and the pace and scale of restructuring this year continue to shape how states, providers, and community organizations operate. What has distinguished this moment is the speed of implementation — including rapid program changes, funding reductions, and reversals of prior policy direction — often unfolding faster than replacement structures or transition pathways could be established.

For states, providers, and community organizations, this has translated into reduced or delayed federal funding streams, less predictability about long-term program support, and greater responsibility for interpretation, implementation, and financial backfilling. Many stakeholders agree the system needs modernization and simplification. What has proven destabilizing is not the acknowledgment of brokenness, but the compression of change into a short window, leaving limited time for planning, coordination, or capacity-building.

The result is a system operating with fewer guardrails and greater uncertainty — particularly for programs that rely on consistent federal scaffolding to function effectively.

Economic Pressure, Public Budgets, and System Capacity

The economic backdrop of 2025 continues to shape nearly every aspect of health care delivery and policy. While market indicators often emphasize aggregate trends, the lived reality for most households — and for the institutions that serve them — remains one of sustained financial strain.

Wages continue to lag behind the cost of living, and household financial buffers have thinned. Economic insecurity is now a defining feature of daily life for many middle- and working-class families, influencing how people interact with the health system — from delaying care to rationing medications and navigating coverage under sustained financial pressure.

At the same time, state and local governments — which form the operational backbone of the U.S. health care system — face mounting fiscal pressure as reduced federal funding, delayed grants, and constrained budgets ripple through health departments, Medicaid agencies, and community-based organizations. In many jurisdictions, budgets have tightened just as demand for services has increased.

The consequences are visible and immediate: staffing constraints, delayed payments, reduced program capacity, and difficult tradeoffs about which services can be sustained. For many states and municipalities, this is not a question of efficiency or reform, but triage.

Because the U.S. health system relies so heavily on state and local execution, these fiscal pressures have outsized effects. Policy intent, no matter how well designed, becomes harder to translate into outcomes when the institutions responsible for implementation are financially strained.

Safety-Net Recalibration and the Changing Social Contract

Alongside fiscal constraint, 2025 has been marked by a recalibration of the nation’s safety-net programs — particularly Medicaid and related assistance programs — reflecting a broader shift in how responsibility, eligibility, and public support are framed.

Across multiple programs, policy changes increasingly emphasize individual accountability, work requirements, and tighter eligibility thresholds. For supporters, these shifts represent efforts to correct inefficiencies and refocus public assistance on defined outcomes. For states, providers, and community organizations, they introduce additional administrative complexity at a time when capacity is already strained.

Medicaid redeterminations have continued to push people off coverage, often not because of income changes but due to procedural barriers and churn. At the same time, rollbacks in flexibility for addressing social drivers of health have narrowed tools that states and communities had begun to rely on to stabilize high-need populations.

What distinguishes this moment is not simply the policy direction itself, but the context in which it is unfolding. These changes are occurring in an environment where public systems are expected to absorb more risk with fewer tools and less predictability.

Together, these shifts surface a deeper question that extends beyond any single program: what role the safety net is expected to play in an era of constrained public investment and heightened individual risk. For many stakeholders, 2025 has felt less like a recalibration at the margins and more like a renegotiation of the social contract itself.

Affordability and Access: A Pressurized Consumer Landscape

As economic and fiscal pressures mount, affordability and access have become the most visible and personal expressions of system strain.

Throughout 2025, the cost of health care has continued to rise across nearly every dimension — insurance premiums, deductibles, hospital services, and prescription drugs. For households already navigating economic instability, even modest increases carry real consequences. Decisions about care are increasingly shaped by financial tradeoffs, not just medical need.

Access has also become less predictable. Coverage feels less stable. Provider availability has narrowed in some regions due to staffing shortages and financial pressures. Administrative complexity has made navigating benefits feel increasingly opaque.

Late-year uncertainty around ACA subsidies has reinforced a sense that affordability protections cannot be taken for granted. For many consumers, this uncertainty translates into hesitation — delaying care, skipping prescriptions, or disengaging from the system altogether.

These pressures are not evenly distributed, but their effects ripple broadly. Employers face higher costs, providers absorb growing uncompensated care, and safety-net systems stretch to cover gaps with fewer resources. Affordability and access are no longer abstract policy debates; they are daily stressors shaping how people experience — and trust — the health care system.

Public Health, Scientific Trust, and a Fractured Information Environment

Few parts of the system have felt the strain of recent changes more acutely than public health and medical science. Public health infrastructure continues to operate under significant pressure, and a fragmented media environment has made it more difficult to maintain consistent, trusted communication with the public.

Skepticism toward vaccines, medical guidance, and scientific expertise has grown alongside economic insecurity and institutional strain. Competing narratives and misinformation — including from prominent public voices — have filled gaps left by reduced capacity and inconsistent messaging.

Trust is not a soft metric in health care; it is a prerequisite for effective prevention, public health response, and care delivery. In 2025, trust has eroded not because of a single failure, but because many people continue to experience the system as unpredictable, unaffordable, and opaque.

Technical Progress Amid Broader System Stress

Despite these pressures, progress continues in areas such as interoperability and data exchange. CMS has advanced efforts to streamline alignment across programs and models, TEFCA participation has expanded, and many organizations have invested in modern data infrastructure.

These developments are meaningful and necessary. Yet they are unfolding in a system where state capacity is strained, privacy governance remains uneven, and cross-sector initiatives face funding uncertainty.

The paradox of 2025 is this: technical capability is advancing while institutional stability weakens. Without stable funding, clear governance, and public trust, even the most sophisticated systems struggle to deliver on their promise.

The System-Level Picture: Fragmentation, Pressure, and the Search for Stability

Stepping back, the story of 2025 is not one of any single policy or administrative decision. It is the story of how a complex health care system responds when foundational structures shift faster than operational reality can absorb.

Several hard truths have come into sharper focus this year:

  • the health system’s deep reliance on consistent public investment

  • the finite nature of state and local capacity

  • the amplifying effect of economic instability on policy change

  • the centrality of affordability and access to public trust

  • trust itself as a form of infrastructure — when it weakens, outcomes suffer

Across conversations throughout the year — with state leaders, technology partners, health systems, payers, and community organizations — one theme has been consistent: the system feels stretched, and the path forward remains uncertain. Yet there is also a shared recognition that the pressures of 2025 have created an opportunity to confront long-standing challenges more directly.

As the system looks toward 2026, the need for stability stands out as clearly as the need for innovation. States and communities will require clearer policy direction and transition pathways. Markets and safety-net programs will need more predictable signals. Public health will depend on stronger foundations for communication and trust. And as data capabilities continue to advance, governance frameworks must keep pace in ways that are practical, transparent, and durable.

The opportunities are real. So are the constraints. The question for 2026 is whether the health system can move from reactive adaptation toward more intentional alignment — rebuilding coherence, capacity, and trust while navigating an environment that remains unsettled.

At Hawthorne Strategies, our work sits at this intersection — helping partners interpret structural shifts, navigate uncertainty, and build durable frameworks that support trust, collaboration, and better outcomes for the people and communities they serve.

Karen Ostrowski

Closing the Gap: Analyzing Senator Cassidy's New Privacy Bill

Last week, Senator Bill Cassidy (R-LA) introduced the Health Information Privacy Reform Act (HIPRA), a bill designed to bring health data protections in line with how information is created and shared today. HIPAA was drafted in the mid-1990s, when most health information lived in doctors’ offices or insurer databases and was exchanged by fax or early electronic systems.

Today’s ecosystem looks very different. Health data now moves through mobile apps, wearables, AI platforms, and wellness devices that operate outside HIPAA and often lack clear privacy protections. Cassidy’s bill aims to bridge that gap by expanding rules, rights, and transparency across a much broader data landscape. It’s not a modernization of HIPAA, but it is intended to address long-standing, pervasive issues with our current patchwork of privacy laws and regulations.


Senator Bill Cassidy, MD, a physician by training, serves as Ranking Member of the Senate Health, Education, Labor & Pensions (HELP) Committee. In recent years he has been an active voice on digital-health and privacy issues. Notable efforts include:

  • In 2021 he raised concerns that data collected by smart devices like fitness trackers could be used to influence insurance coverage or reveal sensitive health conditions.

  • In February 2024 his office released a report recommending modernization of the HIPAA framework and stronger protections for health-related data not currently covered by the law.

  • He also introduced the DELETE Act, a bill aimed at giving individuals more control over data held by brokers, including the right to delete it.

This new bill builds on that record and positions Cassidy as one of the few lawmakers consistently focused on extending privacy protections beyond HIPAA’s original boundaries. And his background as a physician lends credibility to his focus on bridging clinical practice and digital privacy policy.


As I discussed in Part 3 of “The Digital Health Divide”, the existing HIPAA framework reflects the health care system of the mid-1990s, not the system we have today. The lines between health and health-related data have blurred, and information doesn’t just exist in EHRs or claims. Information about a person’s behavior, location, or device readings can be just as sensitive as their medical record, yet under current law, most of that information falls outside HIPAA’s reach. Cassidy’s bill acknowledges the gap and proposes extending a comparable set of privacy, security, and accountability standards to the rest of the health tech ecosystem.

The proposed bill would:

  • Expand who’s covered. It introduces a new category of regulated entities (e.g., health apps, wellness platforms, and data brokers) that handle identifiable health-related information but aren’t traditional HIPAA “covered entities.”

  • Add new individual rights. Beyond HIPAA’s existing access and amendment rights, the bill introduces the novel rights of data deletion and portability (the ability to easily move data between platforms), aligning with growing interest in interoperability and consumer empowerment.

  • Increase transparency. When data moves from a HIPAA-covered environment into a non-HIPAA one, receiving organizations would have to provide plain-language notice explaining that HIPAA protections no longer apply, similar to existing notice requirements under HIPAA.

  • Update technical standards. The Department of Health and Human Services (HHS), in consultation with the Federal Trade Commission (FTC), would issue new guidance on AI, machine-learning applications, and de-identification of health information.

  • Extend enforcement. The civil penalty structure would mirror HIPAA’s, but would apply to the broader set of entities under this bill, not just providers, payers, and business associates.

HIPRA is not intended to replace HIPAA; the two laws would run in parallel. HIPAA remains the governing law for health care providers, health plans, and their business associates. HIPRA would apply to the consumer-tech and digital-health layer that now sits between patients and traditional care delivery. In practice, this could create a more unified baseline for privacy and security expectations, regardless of whether data originates in an electronic health record or on a smartwatch.

For organizations already implementing ONC’s TEFCA framework or FHIR-based APIs under the 21st Century Cures Act, HIPRA wouldn’t necessarily change the technical architecture, but it could redefine accountability for how those data connections are governed and communicated to consumers.
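
As a thought experiment, the bill’s notice provision might reduce, at the system boundary, to something like the following sketch. The function name, fields, and notice text are my own illustrations, not language from HIPRA.

```python
# Hypothetical sketch of the notice idea: when data leaves a
# HIPAA-covered environment for a consumer app, attach a
# plain-language disclosure. All names and text are illustrative.

HIPAA_EXIT_NOTICE = (
    "Notice: this information is leaving a HIPAA-covered environment. "
    "HIPAA's protections will no longer apply to how the receiving app "
    "stores, uses, or shares it."
)


def export_record(record: dict, recipient_is_hipaa_covered: bool) -> dict:
    """Wrap an outbound record with a disclosure when one is required."""
    payload = {"data": record}
    if not recipient_is_hipaa_covered:
        payload["disclosure"] = HIPAA_EXIT_NOTICE
    return payload


# e.g., a patient-authorized export to a wellness app:
print(export_record({"a1c": 6.1}, recipient_is_hipaa_covered=False))
```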

Significance and Potential Hurdles

For health care and digital health organizations, this bill signals three important shifts:

  1. The perimeter of accountability is widening. Organizations that once operated outside HIPAA may soon need comparable compliance programs. While many have adopted HIPAA as a best practice, this ups the stakes for those organizations.

  2. Data transparency will move upstream. Providers and payers offering patient-access tools will need to coordinate more closely with app developers to ensure individuals understand what happens once their data leaves a HIPAA environment.

  3. Policy is catching up to interoperability. After years of focusing on technical exchange, this bill acknowledges the need for complementary privacy and governance standards, something that is long overdue.

While the intent is clear, organizations preparing for this change should note that several implementation issues are likely to arise if the bill moves forward:

  • Compliance Burden: Smaller digital health startups and data brokers that were previously unregulated will face a significant compliance lift, requiring them to implement HIPAA-like security and privacy programs.

  • Jurisdictional Conflicts: Defining the precise boundary between HHS and FTC authority will be critical and complex, particularly where consumer apps are involved, and could lead to regulatory confusion in the short term.

  • Definition Creep: Determining what qualifies as a “regulated entity” could prove challenging. Without clear thresholds, even analytics vendors or cloud providers might fall within scope, risking over-extension of enforcement capacity.

HIPRA will also need to align with the FTC’s Health Breach Notification Rule and a growing landscape of state privacy laws (notably California and Washington). How effectively federal regulators harmonize these frameworks will determine whether HIPRA simplifies compliance or adds another layer to it.

And with much of Congress’s attention currently on ACA subsidies and the cost of care, it is unclear how much traction this bill will gain. But Cassidy’s continued push to update the nation’s privacy framework deserves attention.

The takeaway

HIPAA remains foundational, but it was never meant to govern the full spectrum of health data that exists today. HIPRA recognizes that health information no longer lives solely inside hospitals or claims systems; it follows people into their homes, their devices, and their daily routines.

Whether or not HIPRA advances, it signals where federal policy might be headed: toward a unified privacy baseline that extends beyond covered entities to the entire health-data supply chain. Organizations that anticipate that shift now will be better positioned when regulation catches up.

If enacted, the bill could help close the “HIPAA gap” by harmonizing privacy expectations across both regulated health care and the expanding digital health frontier. The real test, as always, will be in implementation: how agencies define regulated entities, align enforcement, and translate legislative intent into operational practice.

Karen Ostrowski

From Pilots to Policy: Lessons from California’s Consent Journey

California has led the nation in privacy innovation — but decades of “modernizing” consent reveal a deeper challenge: complexity without clarity.

Why every new consent effort risks repeating the same mistakes.


California has long been the nation’s testbed for privacy innovation. From the constitutional right to privacy to medical privacy laws that predate HIPAA, the state has consistently led in developing rigorous standards and granting individuals greater control over how their data is shared. Yet when it comes to health information, California keeps running the same experiment and expecting different results.

For nearly twenty years, the state has been “modernizing” consent, moving from opt-in versus opt-out debates to electronic forms to today’s push for granular consent. Each effort promises progress, yet each ultimately collapses under the same weight: complexity without clarity.

At the same time, California continues to layer new privacy and data-sharing laws, each carrying its own consent nuances, exceptions, and implementation hurdles. This stems partly from the decision to treat HIPAA as a floor, not a ceiling, and partly from the state’s unharmonized web of internal statutes. By demanding stricter, more explicit patient authorization for certain disclosures and requiring separate forms for many transactions, California ensured its laws would always be “more protective” but also more fragmented. Protection on paper does not always translate to clarity in practice.

Instead of a coherent consent approach, we’ve built a patchwork that confuses providers, overwhelms compliance teams, and leaves patients no closer to understanding who can see their data or why. This fragmentation sits atop an already tangled federal landscape—HIPAA, 42 CFR Part 2, FERPA, and a dozen other laws that define privacy differently depending on the context. The result is a system so burdened by overlapping protections that it paradoxically erodes trust.

Now, with the Data Exchange Framework (DxF) transitioning to the Department of Health Care Access and Information (HCAI) under AB 660, California stands at another crossroads. After decades of forms, pilots, and white papers, we still lack a consent model that people understand, providers can operationalize, and systems can reliably enforce.

For more than a decade, I’ve helped California agencies, providers, and vendors wrestle with one deceptively simple question: how do we honor patient consent in a connected health system? As I argued in The Consent Conundrum: Why New Promises of Patient Control May Fall Short, the ambition to give patients complete control often collides with the technical and operational limits of the system itself. California’s experience proves the point.

This isn’t a story of failure so much as repetition. And it raises a difficult but necessary question: what if the problem isn’t that we haven’t found the right model of consent, but that we’ve been trying to solve the wrong problem all along?

What I’ve Seen and Why I Care

My work has always focused on translating policy ambitions into practical, usable strategies. A cornerstone of that work has been advancing consent-to-share policies and workflows in California and beyond.

From 2011 to 2014, I led the Health Information Exchange (HIE) Consent Demonstration Project with the California Health and Human Services Agency under the federal HIE Cooperative Agreement Grant. In those early days of exchange, the central policy question was simple: should consent be opt-in or opt-out? The project culminated in a report to the Legislature—the state’s first comprehensive evaluation of electronic consent for health information exchange—and produced a lasting lesson: technology can transmit data, but only governance can sustain trust.

Since then, I’ve designed consent frameworks and forms, advised the early State Health Information Guidance (SHIG) initiative, and helped build data governance and data sharing policies across California’s health and social service sectors. I also served as a Privacy subject matter expert for the early development of the Healthcare Payments Database at HCAI, where I saw the value of disciplined governance and transparent stewardship. HCAI didn’t just collect data; it built trust around its use.

Those years in the trenches taught me that consent succeeds or fails in the workflow, not in the policy memo. Yet despite everything we’ve learned, California keeps returning to the same debates, as if each new generation believes it is starting from scratch.

A History of Good Intentions

California’s journey toward modern consent began with genuine optimism. Each initiative sought to balance privacy with data sharing. Each left valuable insights and the same unresolved tensions.

The HIE Demonstration Project: Meaningful Choice, Not a Granularity Trial

When the project launched, California set out to evaluate how consent could work in practice, focusing on the opt-in versus opt-out debate. Granular, data-element-level control was the aspirational goal, and like many, I believed in that vision. We imagined a future where patients could decide, with precision, what to share and with whom.

The 2014 report concluded that patients “must be provided an opportunity to make a meaningful choice” about data sharing. Patients wanted control and data segregation, but we also found that “current technology does not support granular patient choice or segregation of data elements.” Looking back, I still agree with the spirit of those findings. But experience has clarified what we couldn’t see then: the gap between aspiration and usability. Granularity remains valuable in limited contexts—such as behavioral health or 42 CFR Part 2 data—but treating it as the foundation rather than the exception has hindered progress. Patients rarely use highly granular controls, and they often create interoperability and workflow barriers.

The real challenge is not building a switch for every data element, but defining a trust framework that allows sharing by default for care, while giving patients a simple, transparent way to opt out altogether. That experience also reinforced a principle I still hold: technology should support policy decisions, not drive them.
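
In code, that trust framework is strikingly small. The sketch below, using hypothetical identifiers, contrasts a single patient-level opt-out with granularity reserved for data where the law demands it, such as 42 CFR Part 2 records.

```python
# A minimal sketch of "share by default for care, with one clear
# opt-out" -- hypothetical identifiers, for illustration only.

OPTED_OUT: set[str] = set()  # patients who declined exchange entirely
PART2_CONSENTS: set[tuple[str, str]] = set()  # (patient, recipient) pairs


def may_share_for_care(patient_id: str, recipient: str,
                       is_part2_data: bool) -> bool:
    if patient_id in OPTED_OUT:
        return False  # one simple, transparent choice
    if is_part2_data:
        # Granularity as the exception, not the foundation: Part 2 data
        # still requires explicit consent for each recipient.
        return (patient_id, recipient) in PART2_CONSENTS
    return True  # sharing for care is the default


OPTED_OUT.add("patient-123")
assert not may_share_for_care("patient-123", "clinic-A", is_part2_data=False)
assert may_share_for_care("patient-456", "clinic-A", is_part2_data=False)
assert not may_share_for_care("patient-456", "clinic-A", is_part2_data=True)
```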

The ASCMI Pilot: Old Tools for New Problems

The Department of Health Care Services’ Authorization to Share Confidential Medi-Cal Information (ASCMI) began as a pilot to create a single, standardized tool for capturing an individual’s consent for real-time data sharing and care coordination. The effort is now in its third iteration, with the most recent form update issued in August 2025. The initiative represents an important evolution in California’s consent journey, but its mechanics remain dated.

The process still relies on paper-era infrastructure and static workflows. The result is a system that appears modern yet depends on manual, unsustainable processes. As I wrote in 2014, “the paper-based process is not scalable.” More than a decade later, that statement remains true.

At over six pages, the ASCMI form is lengthy and dense, a barrier to true patient comprehension. In my own work developing consolidated consent forms, I found that usability improves dramatically when forms are concise, ideally two or three pages at most. The longer and more complex the form, the less likely patients are to understand or feel ownership of their decision. It also increases the burden on clinicians and front-line staff who must explain the form in real time. For a policy tool built on informed choice, length and complexity are not neutral; they are design flaws that undermine usability.

ASCMI shows promise where local implementers are empowered to adapt, but it falters when rigid compliance replaces human judgment. Technology can evolve, but without human-centered design, comprehension and trust do not.

The Modernization Wave: New Acronyms, Old Problems

California is now awash in new frameworks and renewed efforts promising to “modernize” consent: computable consent, cross-sector frameworks, consent utilities.

The Stewards of Change Institute (SOCI) has proposed a statewide Consent Service Utility, while the Sequoia Project’s 2025 Landscape Review envisions computable consent expressed in code and executed by machines. Computable consent itself isn’t new. More than a decade ago, SAMHSA and ONC piloted Data Segmentation for Privacy (DS4P) to tag and transmit sensitive information, such as substance use disorder data protected under 42 CFR Part 2. DS4P established the technical foundation for what today’s reports rebrand as computable consent. These new ideas are imaginative, but they echo the same barriers identified in 2014: fragmented systems, inconsistent privacy interpretations, and weak governance.
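
For readers newer to DS4P, the underlying mechanic is simple to sketch, even though real implementations are not. The labels and structures below are simplified, hypothetical stand-ins for DS4P’s actual security-labeling vocabulary.

```python
# Illustrative sketch of the DS4P idea: tag records with sensitivity
# labels, then let a consent rule decide what crosses the wire.

from dataclasses import dataclass


@dataclass(frozen=True)
class LabeledRecord:
    content: str
    labels: frozenset[str]  # e.g., {"42CFRPart2"} or {"normal"}


def segment_for_disclosure(records: list[LabeledRecord],
                           blocked_labels: frozenset[str]) -> list[LabeledRecord]:
    """Release only records whose labels the patient's consent permits."""
    return [r for r in records if not (r.labels & blocked_labels)]


chart = [
    LabeledRecord("lisinopril 10 mg", frozenset({"normal"})),
    LabeledRecord("SUD treatment note", frozenset({"42CFRPart2"})),
]

# A consent that withholds Part 2 data from this recipient:
released = segment_for_disclosure(chart, frozenset({"42CFRPart2"}))
assert [r.content for r in released] == ["lisinopril 10 mg"]
```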

Nationally, the same story is playing out under ONC’s HTI rule and TEFCA’s QHIN governance framework, where technical ambition again outpaces operational governance. California is not alone in trying to engineer trust through technology rather than build it through stewardship.

Why We Keep Repeating Ourselves

After almost two decades of pilots and reinventions, the patterns are clear. California’s struggle with consent isn’t only technical; it’s cultural and institutional. We repeat the same mistakes because the drivers of policy haven’t changed.

Policy Turnover: The Amnesia Problem
The data sharing policy community is small but constantly shifting. Leadership changes, contracts end, and grant-funded initiatives close before their lessons can be absorbed. Each new team arrives eager to innovate, often recreating what already exists without realizing it. When I re-read recent reports, I see echoes of our 2014 findings on nearly every page. The issue isn’t lack of originality; we simply keep forgetting what we’ve already learned—a kind of policy amnesia that erases progress.

Short Funding Cycles: Pilots Without Permanence
Most of California’s consent policy work is driven by time-limited grants that reward pilots over permanence. Projects end before they can mature, producing tools and forms that vanish once funding runs out. By the time an initiative shows promise, the contract ends, a new grant opens, and the ecosystem resets. That isn’t modernization; it’s motion without progress.

Technology-Led Agendas: When Functionality Outruns Governance
Perhaps the most persistent problem is the belief that technology will solve what governance has not. From DS4P to computable consent to the latest utilities, we continue to build tools faster than we define the rules to guide them. When technology drives policy, we end up optimizing functionality instead of accountability. Innovation matters, but when it becomes the proxy for trust, we lose sight of the people and processes that make consent meaningful.

A Better Path Forward: The Three Pillars of Usable Consent

California doesn’t need another consent framework. It needs one that works.

After twenty years of watching policies falter under real-world conditions, I’ve come to believe that the problem isn’t conceptual but architectural. We’ve built consent systems that impress in pilot decks but fail at the front desk. To move forward, we must design for human environments where time is short, resources are limited, and decisions happen under pressure. Watching what fails and what endures, I’ve found that success always comes down to three things: usability, accountability, and governance — what I call the three pillars of consent.


Pillar 1: Usability
Consent should be simple, intuitive, and accessible. It must meet people where they are, in language and format they understand, and fit naturally into provider workflows. When patients understand what they sign and staff can explain it confidently, consent becomes communication, not bureaucracy. Comprehension builds trust.

Pillar 2: Accountability
Every consent decision, whether paper or electronic, needs a clear line of responsibility for how it is recorded, shared, and honored. Accountability means transparency and traceability. When organizations know who owns each part of the process, data stewardship becomes a shared commitment rather than a compliance checkbox. Accountability turns trust into action.

Pillar 3: Governance
Governance provides the structure that outlasts projects, contracts, and leadership changes. It defines who sets the rules, how they evolve, and how they are enforced. It keeps systems consistent when people and technology change. It transforms consent from documentation into stewardship.


This model doesn’t require new technology. It requires leadership willing to prioritize usability, oversight, and lived experience over theoretical innovation.

Where California Can Lead Next

California now has a rare opportunity to end the pilot era. With HCAI’s leadership, the Data Exchange Framework (DxF) can evolve from aspiration to sustainability—balancing innovation with inclusion, privacy with practicality, and ambition with trust.

When I helped HCAI design privacy protocols for the Healthcare Payments Database, I saw what success looks like: a governance-first approach built on clear authority, transparent use, and accountability for both privacy and performance. That is what makes AB 660 such an important inflection point. By transferring DxF oversight to HCAI, the state has placed consent under an entity with the statutory authority, operational experience, and public-trust mandate to finally get it right.

The DxF Roadmap explicitly identifies consent policy as a future area of governance. That makes HCAI’s role even more consequential: the agency is inheriting not just the DxF, but California’s decades-long struggle to make consent both workable and trusted. HCAI has begun statewide listening sessions, engaging providers, community organizations, and patients to understand what’s working and what’s not. It’s an encouraging start, but this moment demands focus and discipline. An open approach must not be derailed by the same inertia that has slowed progress for years. HCAI’s success will depend on learning from lived experience—from front-line implementers, clinicians, and patients themselves—not just consultants or framework authors.

If HCAI can hold that line, the DxF can become a living system of consent built on usability, accountability, and governance. The goal is not to code consent into a platform but to embed it into practice. Technology can help, but it must follow policy, not define it. That principle has guided my work for more than a decade, and it remains the foundation for trustworthy data exchange.

Applying Lessons Already Learned

California has been running the same experiment for nearly twenty years—refining forms, piloting frameworks, and writing reports that all circle the same truth: consent is not a technology problem. It is a trust problem. And trust cannot be coded.

I have spent much of my career inside this experiment—designing consent forms, implementing data sharing frameworks, and advising agencies striving to balance innovation with privacy. I’ve seen the patterns repeat, but I’ve also seen progress: a growing recognition that consent must be usable, accountable, and governed to last.

The lessons aren’t new. We’ve already learned them through failed pilots, frustrated providers, and patients who simply stopped saying “yes.” The challenge now is to remember them because the stakes aren’t just technical; they’re human. Every patient who hesitates to share data out of confusion or mistrust represents a lost opportunity for better care.

With HCAI at the helm, California has a narrow but powerful window to prove that consent can be both meaningful and manageable, that simplicity and stewardship can coexist. If we do it right, we won’t just fix consent; we’ll rebuild confidence in the system that depends on it.

Karen Ostrowski

Welcome to Hawthorne Strategies

An introduction to Hawthorne Strategies — why I launched this company, what it stands for, and the role writing plays in the journey.

I’ve spent my career helping others translate health policy into practice — guiding organizations, shaping frameworks, and trying to make a complicated system a little more human. But after years of leading inside other people’s structures, I realized it was time to build my own.

Hawthorne Strategies was born from that decision and from a deep desire to do this work differently. I wanted to create a consulting practice rooted in integrity, curiosity, and empathy; one that values understanding as much as expertise. Independence wasn’t just appealing; it was necessary. I needed the freedom to pursue the work that matters most, on my terms, and to model the kind of leadership I wish I’d seen more of in this field and throughout my career.

The name Hawthorne carries personal meaning. It comes from the street where my dad lived for forty years and the closest thing to “home” I’ve ever had. (My family is still there, but my dad passed away from a stroke in 2023). My dad embodied steadiness and quiet strength: a man who worked hard, did what was right, and always believed in doing your best even when no one was watching. Naming this company after him is my way of honoring that legacy. To me, Hawthorne evokes resilience and rootedness — dependable, grounded, and unpretentious. Those are the values I want reflected in every relationship I build.

Writing has always been how I make sense of the world. As a shy kid with a lot of change and emotion swirling around me, I found clarity through putting words on paper. That impulse never left. Over the years, writing became both my craft and my compass; a way to process complexity, to teach, and to connect. It’s where my analytical side meets my creative one.

Here, I’ll be sharing ideas that sit at the intersection of policy, technology, and trust — the forces that shape how health information moves and how people experience care. I’ll also occasionally write about the human side of this work: leadership, burnout, resilience, and what it means to build something of your own.

If you’ve seen any of my recent pieces such as my Digital Health Divide series or my consumer newsletter Make (H)IT Make Sense, you’ll recognize the throughline: the belief that progress only matters when people can trust the systems designed to serve them.

Thank you for being here at the beginning of this next chapter. I’m proud of what Hawthorne stands for and excited to see where this journey leads.

— Karen Ostrowski
CEO & Chief Policy Advisor, Hawthorne Strategies
