Navigating AI Policy While Encouraging Innovation: Heinz Alumna Jutta Williams Proposes a Path Forward
By Jennifer Monahan
But first, some backstory:
Williams was an information security expert with years of experience in the healthcare industry. She had worked extensively on legislation related to tech privacy and security. She had been invited to the White House, had met with policy leaders, and had represented the American Medical Association at the Office for Civil Rights. The result?
“After all that time, after hundreds of thousands of dollars and many visits to Washington, D.C., I was ultimately able to prevent one bad rule going from ‘proposed’ to ‘final,’” Williams said.
Now, back to that pivotal moment in the job interview with the tech industry executive:
Williams asked the executive how they would handle a conflict between two countries’ privacy laws. The person responded, “I think about what I’d want for me, and for my kids, and my family, and then we do that, and we’ll pay a penalty everywhere else.”
Williams remembers thinking, I worked so hard to change one rule for 300 million people, and this person can just decide for ten billion people what the right thing is.
“Now, no single person should have that much power – and there is actually slightly more oversight than that,” Williams said. “But in that moment, I knew I was going to work for big tech because I could effect more change there than I could from the outside.”
Regulation vs. Innovation
The key question facing lawmakers right now – the reason for all the recent Congressional hearings with tech leaders and faculty experts sitting across the room from Senators, talking about artificial intelligence (AI) and what policies are needed to keep society safe – is how to create guardrails for AI without stifling innovation or putting the United States at a strategic disadvantage in the world.

The range of roles Williams has played in government service, healthcare, and tech – she has worked everywhere from the U.S. Department of State to Facebook (now Meta), Twitter (now X), Google, and Reddit – has given her a unique perspective on how best to navigate that tension between regulation and innovation.
The Challenges for Lawmakers
Creating policy is no easy task. Technology evolves far more rapidly than any policy can be adjusted to address each new iteration or breakthrough.

“Pace is the biggest enemy,” Williams explained. “We haven’t overcome the policy debt from the last two cycles of technological innovation, especially from a security and data-protection perspective.”
Even the European Union’s General Data Protection Regulation (GDPR) fails to adequately address some of the privacy and security risks around AI, Williams said, and U.S. policy trails behind that.
Another challenge is that enforcement of regulations can be ineffective. The rules are too often vague or subject to interpretation, with financial penalties that might cost an organization money but fail to change industry practices.
“We’ve been chasing this ideal of making management accountable to address risk,” Williams said, “but the outcome is not making people safer.”
The Challenges for Tech Leaders
For tech leaders, the challenges are different but equally daunting.

“There’s a clear and obvious skills gap,” Williams said. “It’s expensive to hire anybody with the requisite technology skills because we have a scaling problem.”
While universities are producing impressive technological research, most have not pivoted to building a workforce that can translate that research into concrete outcomes, improvements, or benefits.
That workforce need is the reason Williams and her business partner Dr. Rumman Chowdhury co-founded Humane-Intelligence.org, a non-profit that helps AI business owners bring products to market. They focus on safety and ethics, particularly with generative AI. They also started a for-profit company, BiasBounty.ai, that helps companies manage risk detection and mitigation, information security, and the ethics of machine learning (ML) and AI.
High-Stakes Choices
Regardless of whether the lens is that of a lawmaker, an industry leader, an innovator, or a regular citizen, society has an interest in getting AI policy right – and some of the challenges are inherent in the AI itself.

“AI is irreversible,” Williams explained. Once the models are out in the world, they’re hard to rein back in.
She cites an example from her time at Twitter: an AI algorithm, used on many social media platforms – not just Twitter – to assist with image cropping.

The model had been trained through eye-tracking studies conducted on college campuses, mostly with computer science majors – a demographic skewed toward men between the ages of 18 and 25. Wherever participants looked at a photo, the algorithm registered interest. The result was that photos cropped using that model tended to display women from neck to navel.

The algorithm was leveraged for all kinds of use cases – in surveillance systems and for cropping on social media platforms – before anyone realized the issue.
“The danger of a prolific reuse of models and algorithms is that you don’t know why or how they’re trained – and they’re not often retrained,” Williams explained, “so they just live there in perpetuity.”
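To make the mechanics concrete, here is a minimal, hypothetical sketch of saliency-driven cropping in Python. The predict_saliency function is a stand-in invented for illustration – not Twitter’s actual model – but it shows the key point: the crop is centered on whatever the trained model scores as most “interesting,” so any skew in the eye-tracking data used to train that model is reproduced in every crop made downstream.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained saliency model.

    A real model is trained on eye-tracking data; if the study
    participants skew toward one demographic, the model's notion of
    "interesting" inherits that skew, and so does every crop below.
    This placeholder simply returns a uniform map.
    """
    h, w = image.shape[:2]
    return np.ones((h, w)) / (h * w)

def crop_to_saliency(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Center a fixed-size crop on the model's highest-saliency point."""
    saliency = predict_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = saliency.shape
    # Clamp the window so the crop stays inside the image bounds.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Example: crop a 400x600 photo down to a 200x200 thumbnail.
photo = np.zeros((400, 600, 3), dtype=np.uint8)
thumbnail = crop_to_saliency(photo, 200, 200)  # shape (200, 200, 3)
```

Nothing in the cropping logic itself is biased; the bias lives entirely in the trained model it calls – which is why a reused model can carry its skew into systems whose own code looks perfectly neutral.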
In addition to being irreversible, AI has the potential for widespread impact on the workforce.
While AI may not replace every job currently performed by a human, the technology is already affecting how people do their work. CMU’s Block Center for Technology and Society’s Future of Work Initiative examines how machine learning, AI, autonomous vehicles, and other disruptive technologies will influence the labor market.
Williams believes one inherent challenge is that AI is currently being applied without consideration for how appropriate its use may – or may not – be.
“We haven’t trained people to question ‘Why?’ as part of the application of AI,” Williams said. She expects the situation to improve over time as people realize that humans are better suited than AI for some tasks and the novelty of AI wears off.
Though she is optimistic about the future of AI, Williams is also a realist. One of her passions is ensuring that the workforce is able to adapt.
“We really need to make sure people can level up their skills so they don’t become irrelevant,” Williams said.
The Path Forward
Williams’s perspective has been shaped by her experience with the International Organization for Standardization (ISO), an independent, non-governmental organization based in Switzerland. In 2018, Williams served as head of the U.S. delegation to ISO SC 42, the sub-committee for artificial intelligence, and led the technical advisory group for the U.S.

During her tenure, the group established a new ISO organization for AI standards. ISO standards provide the underpinnings for many voluntary certifications and certification programs, including those for security and privacy.
Williams believes industry standards, rather than restrictive legislation, are the way to make AI work for society as a whole.
From the commercial industry perspective, Williams explained, companies signal that they have effective security practices by earning a certificate through a third-party audit. Those certifications provide a level of credibility beyond a company’s claims of compliance or adherence to the requirements of law.
“Standards take a long time to build, and they’re built from industry consortia,” Williams said. ISO’s American delegation to the AI organization included academic institutions, an array of companies, and some government agencies. Groups collaborate and then make recommendations on behalf of the U.S. Those ideas are brought to international plenary sessions, where each country gets a vote and the standard is chosen. The process works: it’s why an Ethernet cable functions in both the U.S. and Europe, and why you can use your phone when you travel across international borders.
For AI, the process is still underway.
“In those early days, one of the hardest debates within that community was just to define what artificial intelligence means internationally,” Williams said.
The next step is to create risk frameworks and use cases for testing and evaluation. Those elements might be different for industrial AI technologies versus healthcare-related technologies.
“Over time, that process yields a very robust set of technical requirements, and it stays in perpetual improvement and maintenance mode, which laws can’t and won’t,” Williams said. “In my experience, standards work is much more valuable than policy work.”
Industry Compliance
Williams also advocates for internal compliance. She has served as a compliance officer in tech companies and believes in the value of having a senior leader on the team who is aligned with something other than top-line metrics and stock value.

That said, external governance plays an important role in preventing abuse. The trick is creating a policy that balances protecting society with nurturing innovation.
“Effective policy is hard, and restrictive policy really does have a cooling effect on innovation. And right now the world is in a race with what can be achieved through technology,” Williams said. “You don’t want to cool innovation too much.”
Williams cited the Biometric Information Privacy Act (BIPA), passed by the Illinois legislature in 2008, as an example of policy that illustrates the inherent trade-offs.
“BIPA completely changed facial recognition and biometrics because it had a private ‘right of action’ for people whose data were used without consent,” Williams said. “We do need laws, and we do need to have enforceability. But it’s hard to make innovative choices when you have a threat of civil action.”
Tech company CEOs set the policies that govern their companies and consequently wield significant power.
“All of these companies have voluntarily extended civil liberties and rights based on what’s good for their bottom line, their reputational risks,” Williams said. “I’ve had conversations with some of these very senior people who say, ‘Okay, pencils down. What would we want for ourselves? Do we want to have these rights for ourselves? Yes. Okay, then let’s extend them to the world and not just to people who live in a GDPR space.’”
While their intentions may be noble, tech leaders also face intense pressure to make sure their company’s stock prices continuously go up. The need to grow the market, grow the company, and ultimately grow the stock value can work against the desire for ethical decision-making. That tension is the reason government oversight is necessary.
“Governance should be very top of mind when crafting policy,” Williams said. “We need to look at threat modeling and think about the abuse cases and the unintended, downstream consequences.”
Preparing for Jobs That Don’t Yet Exist
Williams was an early participant in Heinz College’s Information Security Policy and Management (MSISPM) program, and she credits her experience there with helping her think about her work differently.
“Half the jobs I’ve held didn’t exist when I was in school.” – Jutta Williams
While the domain of information security and privacy is now well established, Williams said the field was in its nascent state during her time at Carnegie Mellon.
“Heinz was ahead of the adoption curve for some of the technological changes that became institutions; privacy was born there.” – Jutta Williams
Williams was a graduate assistant in the CERT Division of CMU’s Software Engineering Institute (SEI). The SEI is a Federally Funded Research and Development Center (FFRDC) that researches complex problems in software engineering, cybersecurity, and AI engineering; creates and tests innovative technologies; and transitions maturing solutions into practice. It partners with the U.S. Department of Defense, other U.S. government agencies, law enforcement, industry, and academia to improve the resilience of computer systems and respond to sophisticated cybersecurity threats.
“There was a lot of research focus in my courses, but taking the applied classes and getting to do hands-on work at CERT really helped prepare me in practical ways,” Williams said.
Because of her own experience, Williams is determined to support others, particularly women, in their professional journeys.
Above: Jutta Williams and her husband Cary Williams with Sunny, the honorary MSISPM class dog, enjoying a beautiful fall day on the CMU campus during Jutta's time at Heinz College.
Giving Back
Williams received a scholarship covering half her tuition to pursue graduate studies at Heinz College. She recalled the answer to an application question that she believes earned her the support.
“I wrote that in almost 10 years, I had never worked with a woman as a peer. I've been in crypto programs and done all this DOD defense work, and I work with women, but not as an engineering peer,” Williams said. “That’s sad.”
Though she never felt that she lacked opportunity because of her gender, Williams said it was sometimes lonely as a woman working in tech. She is happy to see those trends starting to shift. Even so, Williams said it can be hard for recent graduates to find that first job. And she wants to pay forward the help she has received.
“People gave me a leg up,” Williams said. “I've had advocates and mentors in my life who helped me make it. And the CMU brand has opened doors for me, so of course I’m going to give back.”