
Senators Demand Answers on AI and Health Data Privacy as Regulatory Gaps Widen 

March 10, 2026

Every day, millions of Americans strap on fitness trackers, log their meals in nutrition apps, or upload their DNA to genetic testing sites. They do it voluntarily, usually without reading the fine print. And almost none of it is covered by the same privacy rules that govern their doctors.


Now, as artificial intelligence supercharges what companies can do with that data, a growing number of lawmakers are asking a pointed question: should it be?

The issue came to a head in Washington last week at a hearing of the Senate Health, Education, Labor and Pensions Committee, Nextgov/FCW reported. Senators from both parties raised concerns about the lack of federal guardrails around AI-powered health tools that operate outside the healthcare system, and therefore outside the reach of HIPAA, the main U.S. law protecting medical privacy.

The distinction is important. A hospital or your doctor’s office must follow strict rules about how your health information is stored, shared and sold. But a fitness app you download or a wearable device you buy at a consumer electronics store? Different story. Those products do not have to play by the same rules, even if they are collecting the same sensitive data.

Committee chairman Sen. Bill Cassidy (R-LA) laid out the stakes in stark terms, raising the possibility that genetic data shared with a third-party AI tool could eventually be used to discriminate against a person’s family members in ways that skirt existing legal protections.


“I do think there’s some consumer safeguards that should be implemented, like a box that pops up, ‘Your data uploaded will be boom-boom-boom, now accessible for marketing,’ unless you say not,” he said.

Cassidy’s concern illustrates a gap that regulators have struggled with for years: laws written in a different era do not always map cleanly onto a world where AI can cross-reference and analyze data in ways that weren’t previously possible.

The government’s witness at the hearing was Thomas Keane, the assistant secretary for technology policy at the Department of Health and Human Services. He pointed to an HHS request for information (RFI) issued in December that is gathering public input on how AI can be deployed safely in healthcare. Keane said the agency has received hundreds of responses so far, which will help guide future policy.


But when pushed on whether HHS already has the authority to regulate what happens to patient data once a person voluntarily shares it with a third-party AI tool, Keane was blunt. The agency does not believe it can regulate data that patients have consented to release, he said. Within the government’s existing health data exchange network, known as TEFCA, there are rules about reselling data or sharing it with marketers, but those protections end the moment a patient takes personal control of their own records.

Sen. Angela Alsobrooks (D-MD) also pushed on whether current federal authorities are adequate, or whether Congress needs to act to ensure these tools are safe and transparent for both patients and clinicians.

The HHS RFI is the most immediate thing to track. Keane told lawmakers the agency plans to share the results with Congress, which could serve as the foundation for either new regulations or legislation. Whether that process moves quickly enough to satisfy members of the HELP Committee, or whether lawmakers decide to push their own bills, remains to be seen.

For now, the view from Capitol Hill is that the explosion of consumer AI health tools has outpaced the rules designed to protect the people who use them. And with genetic data, mental health apps and continuous biometric monitoring becoming part of everyday life, Congress appears increasingly unwilling to let that gap go unaddressed.