New York’s Wrong State of Mind for AI

Juan Londoño

March 12, 2026

Across the country, state lawmakers are introducing terrible artificial intelligence (AI) bills that pose all kinds of constitutional and compliance issues. However, New York holds the sorry distinction of having some of the worst legislation to date.

On March 4th, New York’s S7263, a bill that would impose liability on AI developers when their models give advice in regulated professions (such as law or medicine), advanced out of Senate committee. The bill seeks to prevent the use of AI chatbots to obtain advice that would typically be given by licensed professionals, such as lawyers, doctors, accountants, or therapists. While ostensibly written to protect users from making potentially life-altering decisions, the bill ultimately restricts users’ access to valuable information they could not otherwise obtain. NY lawmakers should reject this misguided legislation and embrace a light-touch approach to AI regulation.

The bill aims to protect users from making the kinds of ill-informed decisions that have garnered media attention, such as one AI user who decided to contravene their attorney’s advice, fire them, and attempt to self-represent with AI. But in its attempt to curb some of AI’s most infamous misuses, it would also greatly limit AI’s ability to improve New Yorkers’ lives. Every day, Empire State residents use AI services before seeking professional advice to make sure that hiring a lawyer, accountant, or therapist is even the right decision. It’s very difficult to know when a second opinion, or no further opinion, is warranted, and existing websites and blogs can offer incomplete or patchy information. Restricting users’ ability to leverage AI for answers would be especially harmful for lower-income users, who cannot afford to pay for professional services and often opt to forgo them.

In a sign that the bill prioritizes protecting licensed professionals over user safety, it undermines the guardrails AI companies have put in place to protect users. For example, the bill explicitly establishes that the use of disclosure notices by an AI agent would not excuse the company from legal liability for model-derived content. In other words, even if AI companies notify users of a model’s limitations or remind them that the model is an imperfect substitute for a professional’s services, the bill would still hold the company liable.

Ultimately, some of the behaviors that the bill is attempting to address will not disappear even if AI were magically removed from the equation. Individuals who overestimate their capacity to outperform professionals’ services have existed and will continue to exist whether they use AI or not. These DIYers have relied on books, encyclopedias, search engines, and other methods to conduct their oft-inadequate research and argumentation.

However, the imbalance of knowledge between consumers and professionals ordinarily makes it very difficult for the former to properly vet the latter, largely because customers do not even know what they do not know. AI has the potential to change this dynamic. A search engine functions like a librarian, requiring users to know exactly what to ask in order to find what they are looking for. AI, on the other hand, works like a tutor, guiding and contextualizing users as they venture into heaps of unknown material. With AI, users can better understand a doctor’s treatment plan, evaluate a lawyer’s advice, or quickly gauge the market rate of a contractor’s quote.

New York’s S7263 would deprive New Yorkers of this valuable benefit. While discussions over appropriate AI safeguards in high-stakes situations are worth having, this bill seems to focus more on protecting professionals’ paychecks than on helping users. New Yorkers deserve access to high-quality information before making potentially life-altering decisions.