From Pilots to Policy: Lessons from California’s Consent Journey
Why every new consent effort risks repeating the same mistakes.
California has long been the nation’s testbed for privacy innovation. From the constitutional right to privacy to medical privacy laws that predate HIPAA, the state has consistently led in developing rigorous standards and granting individuals greater control over how their data is shared. Yet when it comes to health information, California keeps running the same experiment and expecting different results.
For nearly twenty years, the state has been “modernizing” consent, moving from opt-in versus opt-out debates to electronic forms to today’s push for granular consent. Each effort promises progress, yet each ultimately collapses under the same weight: complexity without clarity.
At the same time, California continues to layer new privacy and data-sharing laws, each carrying its own consent nuances, exceptions, and implementation hurdles. This stems partly from the decision to treat HIPAA as a floor, not a ceiling, and partly from the state’s unharmonized web of internal statutes. By demanding stricter, more explicit patient authorization for certain disclosures and requiring separate forms for many transactions, California ensured its laws would always be “more protective” but also more fragmented. Protection on paper does not always translate to clarity in practice.
Instead of a coherent consent approach, we’ve built a patchwork that confuses providers, overwhelms compliance teams, and leaves patients no closer to understanding who can see their data or why. This fragmentation sits atop an already tangled federal landscape—HIPAA, 42 CFR Part 2, FERPA, and a dozen other laws that define privacy differently depending on the context. The result is a system so burdened by overlapping protections that it paradoxically erodes trust.
Now, with the Data Exchange Framework (DxF) transitioning to the Department of Health Care Access and Information (HCAI) under AB 660, California stands at another crossroads. After decades of forms, pilots, and white papers, we still lack a consent model that people understand, providers can operationalize, and systems can reliably enforce.
For more than a decade, I’ve helped California agencies, providers, and vendors wrestle with one deceptively simple question: how do we honor patient consent in a connected health system? As I argued in The Consent Conundrum: Why New Promises of Patient Control May Fall Short, the ambition to give patients complete control often collides with the technical and operational limits of the system itself. California’s experience proves the point.
This isn’t a story of failure so much as repetition. And it raises a difficult but necessary question: what if the problem isn’t that we haven’t found the right model of consent, but that we’ve been trying to solve the wrong problem all along?
What I’ve Seen and Why I Care
My work has always focused on translating policy ambitions into practical, usable strategies. A cornerstone of that work has been advancing consent-to-share policies and workflows in California and beyond.
From 2011 to 2014, I led the Health Information Exchange (HIE) Consent Demonstration Project with the California Health and Human Services Agency under the federal HIE Cooperative Agreement Grant. In those early days of exchange, the central policy question was simple: should consent be opt-in or opt-out? The project culminated in a report to the Legislature—the state’s first comprehensive evaluation of electronic consent for health information exchange—and produced a lasting lesson: technology can transmit data, but only governance can sustain trust.
Since then, I’ve designed consent frameworks and forms, advised the early State Health Information Guidance (SHIG) initiative, and helped build data governance and data sharing policies across California’s health and social service sectors. I also served as a Privacy subject matter expert for the early development of the Healthcare Payments Database at HCAI, where I saw the value of disciplined governance and transparent stewardship. HCAI didn’t just collect data; it built trust around its use.
Those years in the trenches taught me that consent succeeds or fails in the workflow, not in the policy memo. Yet despite everything we’ve learned, California keeps returning to the same debates, as if each new generation believes it is starting from scratch.
A History of Good Intentions
California’s journey toward modern consent began with genuine optimism. Each initiative sought to balance privacy with data sharing. Each left valuable insights and the same unresolved tensions.
The HIE Demonstration Project: Meaningful Choice, Not a Granularity Trial
When the project launched, California set out to evaluate how consent could work in practice, focusing on the opt-in versus opt-out debate. Granular, data-element-level control was the aspirational goal, and like many, I believed in that vision. We imagined a future where patients could decide, with precision, what to share and with whom.
The 2014 report concluded that patients “must be provided an opportunity to make a meaningful choice” about data sharing. Patients wanted control and data segregation, but we also found that “current technology does not support granular patient choice or segregation of data elements.” Looking back, I still agree with the spirit of those findings. But experience has clarified what we couldn’t see then: the gap between aspiration and usability. Granularity remains valuable in limited contexts—such as behavioral health or 42 CFR Part 2 data—but treating it as the foundation rather than the exception has hindered progress. Patients rarely use highly granular controls, and such controls often create interoperability and workflow barriers.
The real challenge is not building a switch for every data element, but defining a trust framework that allows sharing by default for care, while giving patients a simple, transparent way to opt out altogether. That experience also reinforced a principle I still hold: technology should support policy decisions, not drive them.
The ASCMI Pilot: Old Tools for New Problems
The Department of Health Care Services Authorization to Share Confidential Medi-Cal Information (ASCMI) began as a pilot to create a single, standardized tool for capturing an individual’s consent for real-time data sharing and care coordination. The effort is now in its third iteration, with the most recent form update issued in August 2025. The initiative represents an important evolution in California’s consent journey, but its mechanics remain dated.
The process still relies on paper-era infrastructure and static workflows. The result is a system that appears modern yet depends on manual, unsustainable processes. As I wrote in 2014, “the paper-based process is not scalable.” More than a decade later, that statement remains true.
At over six pages, the ASCMI form is lengthy and dense, a barrier to true patient comprehension. In my own work developing consolidated consent forms, I found that usability improves dramatically when forms are concise, ideally two or three pages at most. The longer and more complex the form, the less likely patients are to understand or feel ownership of their decision. It also increases the burden on clinicians and front-line staff who must explain the form in real time. For a policy tool built on informed choice, length and complexity are not neutral; they are design flaws that undermine usability.
ASCMI shows promise where local implementers are empowered to adapt, but it falters when rigid compliance replaces human judgment. Technology can evolve, but without human-centered design, comprehension and trust do not.
The Modernization Wave: New Acronyms, Old Problems
California is now awash in new frameworks and renewed efforts promising to “modernize” consent: computable consent, cross-sector frameworks, consent utilities.
The Stewards of Change Institute (SOCI) has proposed a statewide Consent Service Utility, while the Sequoia Project’s 2025 Landscape Review envisions computable consent expressed in code and executed by machines. Computable consent itself isn’t new. More than a decade ago, SAMHSA and ONC piloted Data Segmentation for Privacy (DS4P) to tag and transmit sensitive information, such as substance use disorder data protected under 42 CFR Part 2. DS4P established the technical foundation for what today’s reports rebrand as computable consent. These new ideas are imaginative, but they echo the same barriers identified in 2014: fragmented systems, inconsistent privacy interpretations, and weak governance.
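To make the tagging idea behind DS4P and “computable consent” concrete, here is a minimal sketch of a disclosure check driven by sensitivity tags and a patient consent record. Everything in it—the tag names, the consent structure, the function itself—is hypothetical and illustrative of the general concept, not drawn from DS4P, FHIR, or any California system.

```python
# Hypothetical sketch of a DS4P-style "computable consent" check.
# Tag names and the consent-record structure are illustrative only.

SENSITIVE_TAGS = {"42CFRPart2", "BehavioralHealth"}

def may_disclose(record_tags, patient_consent, purpose):
    """Return True if a tagged record may be shared for a given purpose."""
    if purpose == "treatment":
        # Default-share for care, unless the record carries a
        # sensitivity tag that requires explicit authorization.
        restricted = SENSITIVE_TAGS & set(record_tags)
        if not restricted:
            return True
        # Sensitive data needs an affirmative grant on file.
        granted = patient_consent.get("granted", set())
        return all(tag in granted for tag in restricted)
    # Non-treatment purposes always require explicit consent here.
    return purpose in patient_consent.get("purposes", set())

# Example: substance use disorder data flows for treatment only
# when an explicit Part 2 grant exists.
consent = {"granted": {"42CFRPart2"}, "purposes": set()}
print(may_disclose(["42CFRPart2"], consent, "treatment"))  # True
print(may_disclose(["42CFRPart2"], {}, "treatment"))       # False
```

Even this toy version shows why governance, not code, is the hard part: someone still has to decide which tags exist, which purposes are default-shared, and who maintains the rules as law and practice evolve.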
Nationally, the same story is playing out under ONC’s HTI rule and TEFCA’s QHIN governance framework, where technical ambition again outpaces operational governance. California is not alone in trying to engineer trust through technology rather than build it through stewardship.
Why We Keep Repeating Ourselves
After almost two decades of pilots and reinventions, the patterns are clear. California’s struggle with consent isn’t only technical; it’s cultural and institutional. We repeat the same mistakes because the drivers of policy haven’t changed.
Policy Turnover: The Amnesia Problem
The data sharing policy community is small but constantly shifting. Leadership changes, contracts end, and grant-funded initiatives close before their lessons can be absorbed. Each new team arrives eager to innovate, often recreating what already exists without realizing it. When I re-read recent reports, I see echoes of our 2014 findings on nearly every page. The issue isn’t lack of originality; we simply keep forgetting what we’ve already learned—a kind of policy amnesia that erases progress.
Short Funding Cycles: Pilots Without Permanence
Most of California’s consent policy work is driven by time-limited grants that reward pilots over permanence. Projects end before they can mature, producing tools and forms that vanish once funding runs out. By the time an initiative shows promise, the contract ends, a new grant opens, and the ecosystem resets. That isn’t modernization; it’s motion without progress.
Technology-Led Agendas: When Functionality Outruns Governance
Perhaps the most persistent problem is the belief that technology will solve what governance has not. From DS4P to computable consent to the latest utilities, we continue to build tools faster than we define the rules to guide them. When technology drives policy, we end up optimizing functionality instead of accountability. Innovation matters, but when it becomes a proxy for trust, we lose sight of the people and processes that make consent meaningful.
A Better Path Forward: The Three Pillars of Usable Consent
California doesn’t need another consent framework. It needs one that works.
After twenty years of watching policies falter under real-world conditions, I’ve come to believe that the problem isn’t conceptual but architectural. We’ve built consent systems that impress in pilot decks but fail at the front desk. To move forward, we must design for human environments where time is short, resources are limited, and decisions happen under pressure. Across all of it, what fails and what endures, success always comes down to three things: usability, accountability, and governance—what I call the three pillars of consent.
Pillar 1: Usability
Consent should be simple, intuitive, and accessible. It must meet people where they are, in language and format they understand, and fit naturally into provider workflows. When patients understand what they sign and staff can explain it confidently, consent becomes communication, not bureaucracy. Comprehension builds trust.
Pillar 2: Accountability
Every consent decision, whether paper or electronic, needs a clear line of responsibility for how it is recorded, shared, and honored. Accountability means transparency and traceability. When organizations know who owns each part of the process, data stewardship becomes a shared commitment rather than a compliance checkbox. Accountability turns trust into action.
Pillar 3: Governance
Governance provides the structure that outlasts projects, contracts, and leadership changes. It defines who sets the rules, how they evolve, and how they are enforced. It keeps systems consistent when people and technology change. It transforms consent from documentation into stewardship.
This model doesn’t require new technology. It requires leadership willing to prioritize usability, oversight, and lived experience over theoretical innovation.
Where California Can Lead Next
California now has a rare opportunity to end the pilot era. With HCAI’s leadership, the DxF can evolve from aspiration to sustainability—balancing innovation with inclusion, privacy with practicality, and ambition with trust.
When I helped HCAI design privacy protocols for the Healthcare Payments Database, I saw what success looks like: a governance-first approach built on clear authority, transparent use, and accountability for both privacy and performance. That is what makes AB 660 such an important inflection point. By transferring DxF oversight to HCAI, the state has placed consent under an entity with the statutory authority, operational experience, and public-trust mandate to finally get it right.
The DxF Roadmap explicitly identifies consent policy as a future area of governance. That makes HCAI’s role even more consequential: the agency is inheriting not just the DxF, but California’s decades-long struggle to make consent both workable and trusted. HCAI has begun statewide listening sessions, engaging providers, community organizations, and patients to understand what’s working and what’s not. It’s an encouraging start, but this moment demands focus and discipline. An open approach must not be derailed by the same inertia that has slowed progress for years. HCAI’s success will depend on learning from lived experience—from front-line implementers, clinicians, and patients themselves—not just consultants or framework authors.
If HCAI can hold that line, the DxF can become a living system of consent built on usability, accountability, and governance. The goal is not to code consent into a platform but to embed it into practice. Technology can help, but it must follow policy, not define it. That principle has guided my work for more than a decade, and it remains the foundation for trustworthy data exchange.
Applying Lessons Already Learned
California has been running the same experiment for nearly twenty years—refining forms, piloting frameworks, and writing reports that all circle the same truth: consent is not a technology problem. It is a trust problem. And trust cannot be coded.
I have spent much of my career inside this experiment—designing consent forms, implementing data sharing frameworks, and advising agencies striving to balance innovation with privacy. I’ve seen the patterns repeat, but I’ve also seen progress: a growing recognition that consent must be usable, accountable, and governed to last.
The lessons aren’t new. We’ve already learned them through failed pilots, frustrated providers, and patients who simply stopped saying “yes.” The challenge now is to remember them, because the stakes aren’t just technical; they’re human. Every patient who hesitates to share data out of confusion or mistrust represents a lost opportunity for better care.
With HCAI at the helm, California has a narrow but powerful window to prove that consent can be both meaningful and manageable, that simplicity and stewardship can coexist. If we do it right, we won’t just fix consent; we’ll rebuild confidence in the system that depends on it.