Artificial Intelligence

As someone who regularly moves between Washington, DC, and Silicon Valley, I've seen firsthand how differently AI gets talked about in each place. In DC, the focus is often on regulation, unintended risks, and the need to control outcomes. Meanwhile, in Silicon Valley, it's about AI's boundless potential to transform industries through productivity, automation, and data insights.

The truth is that both conversations matter. But we need a unifying principle that puts consumers first. These debates often lose sight of the fact that real people -- not federal or state government employees or startup founders -- are the ones affected by AI every day. Whether in healthcare, education, or the workforce, the user's experience -- or, to translate for lawmakers, the everyday American's experience -- needs to be front and center.

Look at the data: A 2023 Pew Research study showed that 68% of Americans worry about AI making decisions that affect their lives, from health insurance approvals to job screenings. In one audit of insurance algorithms, 20% of claim denials were linked to bias rather than legitimate medical reasons.

The issue is that policymakers often don't think about the user experience the way the tech industry does. The question shouldn't just be, "How do we regulate or deregulate AI?" but, "How do we protect and empower the people affected by AI?"

We need a shift. Policymakers should adopt the following principles:

* Real Transparency for Consumers: Too often, it's unclear who is responsible when AI systems fail. Companies should be required to clearly state when AI is involved in decision-making and who is accountable when harm occurs—without layers of unnecessary bureaucracy.

* Accountability and Real-World Impact: Before deployment, AI should be tested in real-world scenarios to understand how it impacts people. Take the example of a major retailer in 2022: their hiring algorithm excluded applicants with employment gaps, disproportionately disadvantaging caregivers. Real-world impact reviews could prevent such failures.

* Fairness and Consumer Protection: AI should work for everyone. Systems must be built and tested to ensure they don't unfairly deny access to services or opportunities based on irrelevant factors like zip codes or employment gaps.

* Consumer-Centered Innovation Standards: Innovation should prioritize societal well-being. This means encouraging AI tools that help solve genuine problems, like improving medical diagnostics or enhancing public education, while keeping safeguards in place to avoid harm.

The stakes of getting this balance right early are enormous. McKinsey projects that AI could contribute $13 trillion to the global economy by 2030 -- but only if it earns public trust. And when AI is used well, the results can be impressive: ahead of its Davos conference, the World Economic Forum projected that by 2030 the world could see a net increase of 78 million jobs as companies engage seriously with worker retraining and upskilling.

We must do better to broaden the voices in this conversation and move past yesterday's vernacular. For our children learning in classrooms where teachers and computers now share the room. For our loved ones navigating critical medical decisions in the emergency room. For our workers who don't deserve to hear that their skills are no longer needed by their company. Now is the time to make sure AI puts people first.

Rohini Kosoglu is a venture partner at Fusion Fund, a venture firm focused on early-stage artificial intelligence, technology, and health care investments, and a Policy Fellow at the Stanford Institute for Human-Centered AI. Kosoglu is a leading national expert in domestic policy and a veteran of the White House and the U.S. Senate, most recently serving as deputy assistant to the president and domestic policy advisor to the vice president.