Sep 26, 2024
Windsor-based technology services firm Cooperative Systems is using artificial intelligence to automate repeatable tasks for its clients and document customer interactions. In New Haven, health startup Anthogen developed an AI tool for patients with chronic conditions that monitors vitals and sends reminders — to drink a glass of water, or stretch — to help avoid flare-ups. And Cheshire marketing consultancy Rebellion Group uses AI to analyze customer decision-making, combing through online review language to understand the words and tone that appeal to individual users.

Connecticut companies in nearly every industry are deploying artificial intelligence across an expanding range of applications, using the technology to develop speedier, more consistent and better-tailored products and services.

At the same time, state and federal lawmakers are scrambling to ensure the safety and security of AI as its use cases grow as rapidly as the data troves the technology learns from.

“The key is taking a risk-based approach,” said Sen. James Maroney, D-Milford, who co-chaired a task force and co-sponsored state legislation addressing AI during the General Assembly’s 2024 session. “When we’re using AI to make critical decisions about someone’s life, we need to make sure it’s been tested to be safe.”

Those “critical decisions” could include things like human resources algorithms that cull certain resumes based on biased criteria, Maroney said, or housing applications that get denied due to inaccurate or outdated data.

“If these decisions are being made about people, and we’re not giving them the ability to know why and to correct the data when it’s wrong, they may continue to get denied important opportunities,” he said.

Cheshire marketing consultancy Rebellion Group uses AI to understand customer behavior, examining the language that drives customers’ decision-making rather than targeting users based on demographic characteristics. Credit: Shahrzad Rasekh / CT Mirror

Senate Bill 2, which passed the Senate earlier this year but failed to come up for a vote in the Connecticut House of Representatives before the session adjourned, would have established rules protecting consumers from discrimination, requiring companies to document their processes, conduct risk assessments, include labels and generally disclose to their customers how AI is being applied. It also would have made it a crime to deceptively use AI in election-related content or to disseminate AI-generated nude images.

Opponents of the bill, including Gov. Ned Lamont, expressed concern that the regulations could impede innovation. Many suggested AI should be governed by federal rules, so as not to create piecemeal legislation across states.

Maroney, who is planning to introduce legislation again next session, said he’s participating in a working group with lawmakers from 46 other states to develop AI standards in an effort to avoid that “patchwork of disparate laws.”

But for the moment, the landscape is far from uniform. Colorado adopted legislation this year that closely mirrors Connecticut’s proposed law. And California Gov. Gavin Newsom is set to decide whether to sign a sweeping bill lawmakers passed this summer establishing the strictest state regulations so far for AI. Broadly, over 30 states took varied actions related to governing AI this year.

At the federal level, U.S. Sen. Richard Blumenthal, D-Conn., and Senate Majority Leader Chuck Schumer, D-N.Y., have each led efforts to draft a “framework” for regulating AI.
And the White House laid out standards for AI in an executive order last year.

Maroney said he thinks there may have been a lack of understanding of what S.B. 2 was aiming to do, and he’s been meeting with companies since the session adjourned to explain the intent of his proposed regulations. He cited a recent study by Boston Consulting Group that found most business leaders are waiting for regulatory clarity before fully deploying their AI strategies.

“They want clear rules of the road,” he said. “So the longer we wait, we’re actually preventing full adoption.”

A human in the loop

Researchers and businesses studying and deploying AI point out that the technology isn’t new.

Machine learning, which processes large amounts of data and makes predictions, dates back to the last century. Robotic process automation, which includes the kinds of services Cooperative Systems provides for its clients, has been in development for decades.

But recent calls for stricter regulation of AI came alongside the advent of what’s known as generative AI, which produces human-like output — text, speech or video, for example — in response to prompts. Popular genAI applications, such as OpenAI’s ChatGPT and Anthropic’s Claude, process source material from all over the internet in generating their responses, which can result in biased or inaccurate outputs.

“That’s what the public is so fascinated by, and that is really an enormous computational leap,” said Lee Schwamm, who leads digital strategy at the Yale School of Medicine and serves as chief digital health officer for Yale New Haven Health.

Schwamm said automating rule-based repetitive tasks and deploying machine learning to process large data sets and offer predictions can be relatively safe activities.

“In those circumstances, you want a human in the loop, but you can really proceed without a human in the loop,” he said. “Where you really need a human in the loop, and where you could potentially proceed without one, is when you have generative AI creating the substance of whatever that output is. And I think that’s a real challenge.” That can lead to inequities like algorithms offering different mortgage rates to customers based on their race, or fake advertisements targeting voters in certain demographic groups and leading to voter suppression.

Schwamm said “reasonably enforceable” standards for equity are a good idea, as is some legal structure around liability that would establish who’s responsible if an algorithm doesn’t work properly. And he said malicious activity seeking to influence elections, bully or manipulate people or harm someone’s reputation should be criminalized.

“You want to create safeguards, but you don’t want to inhibit the learning arc here, which is steep,” he said.

Still, the best way to lay out those safeguards in legislation is a topic of fierce debate.

Employees at Rebellion Group discuss goals during a meeting. Credit: Shahrzad Rasekh / CT Mirror

Steve Shwartz, an AI researcher, patent holder, entrepreneur and author who has been working in the field since the 1980s, called Maroney’s bill “awful” and said he was “thrilled” that Lamont indicated he wouldn’t sign it. The legislation didn’t address many of the main concerns people have about AI, such as autonomous vehicles, he said, and it would have placed too big a burden on the kinds of startup ventures Connecticut is trying to court.
“In an effort to prevent discrimination, the bill required companies to disclose information about training data” (i.e., the source input), Shwartz said. But most companies deploying generative AI are using apps like ChatGPT or Claude, which don’t disclose this information. “My concern is that this requirement would prevent companies from using these third-party tools,” thus stifling innovation, he said.

Shwartz is also a proponent of establishing rules at the federal level, rather than state by state. He said Schumer’s framework and a “Blueprint for an AI Bill of Rights” put forth by the White House two years ago both made sense to him.

“Why are the states getting involved?” Shwartz said. “Imagine a startup company. You’re trying to survive, you’re trying to make payroll. And to release your technology, you’ve got to go through 50 states [with] 700-page documents each, and you’ve got to make sure you’re not in violation of any of those rules.

“Imagine how much legal help you’d need for that.”

Within the gray space

Economic development leaders in Connecticut are trying to cultivate a friendly climate for companies pioneering technologies like AI.

Companies in sectors ranging from technology services and health care to insurance, logistics, automotive, engineering, legal, education and even social services are working AI into their operations. Even Bristol-based ESPN recently launched AI-generated game recaps.

Connecticut Innovations, the state’s quasi-public venture capital arm, said over a third of the startups it has funded are currently using AI in meaningful ways, and it expects that figure to rise to 100% within the next few years. And the Department of Economic and Community Development recently announced a $100 million grant program aimed at establishing AI and quantum computing hubs in the state.

Business leaders largely acknowledge the need for some regulatory guardrails to ensure the safety and equity of AI applications. (Connecticut company leaders and experts in AI appear to be predominantly white and male.) Still, many applications don’t reach the level of risk Maroney’s legislation is concerned with, because they draw conclusions and produce outputs based on a closed loop of data the user feeds them.

Daniel Nadis, founder and chief executive of Anthogen, said his company takes great care to secure patients’ information. “For us, the issue is about the person whose data it is having control over their data,” he said.

“That’s the main thing, the security and reliability of the data that’s going in and coming out of these platforms,” said Cooperative Systems President Scott Spatz.

In emailed comments, Chris Nocera, chief AI officer at Rebellion Group, said, “Because AI models are only as effective as the data they are trained on, and because the data they are trained on is frequently from our own human decisions and behavior, safeguards and open discussions are necessary.”

Chris Nocera, chief AI officer at Rebellion Group. Nocera and a team of data scientists and strategists worked with Virtuoso and 535 to develop an AI tool known as “Krakin.” The tool “reveals deep psychological consumer insights,” Nocera said in a press release. Credit: Shahrzad Rasekh / CT Mirror

Maroney expects the legislature to take up his AI legislation again during the 2025 session. By then, there should be some evidence from Colorado of how its law — which closely resembles Connecticut’s S.B. 2 — has played out in practice.
And there could be new regulations in California if Newsom signs the bill on his desk, potentially leading to broad changes nationally.

Maroney said he’s been working on a new draft of his bill, which he intends to make public next month. “When industry and government work together to develop regulations, it’s actually pro innovation,” he said.

Nocera, who said he has spoken with the senator about the Connecticut legislation, encouraged a flexible approach rather than “one size fits all” — because there’s already such a wide range of AI applications in use and in development.

“While taking a hardline position of either stifling caution or overzealous ambition is an easy line to walk, it is far more difficult and important to proceed and thrive within the gray space between,” he wrote. “A flexible and collaborative approach involving legislators, technologists, AI developers and ethicists is essential.”