Perspective
Response to Politico's "When ‘red-teaming’ AI isn’t enough"
Austin Carson
|
October 26, 2023
|
Washington, D.C.

SeedAI founder and president Austin Carson sent the following to Politico reporter Derek Robertson following his story, "When ‘red-teaming’ AI isn’t enough":

As an organizer of the Generative Red Team (GRT) Challenge at DEF CON, I need to respond to your story, "When ‘red-teaming’ AI isn’t enough," given some of the statements from the Data & Society report that it includes.

I’ll tell you the same thing I tell anyone thinking about AI policy: If you think you’ve found a silver bullet, it’s a trap. Every intervention is an improvement to the system and a win on the margins; we’re going to need the full multitude of good ideas here.

Stating that we believed the red-teaming process would solve all of the concerns around AI misunderstands or misrepresents our effort. If I’m being honest, anyone pushing red teaming as a fix-all is either being disingenuous or negligent. I’d like to believe the authors of the report know we’re neither.

Our event at DEF CON wasn't solely about red teaming and vulnerability disclosure. It was a first step in opening the floodgates: generating awareness of AI and exposing more people to it through the lens of AI security. We flew in a group of more than 200 diverse community college students and other partners from 18 states so that they could participate while learning from and engaging with the hacker community and each other.

At the GRT Challenge - the largest public red-teaming event in history - the goal was not to rely on the outcome “as a policy solution and means of achieving safer and more trustworthy AI systems” (although when a participant found a specific issue, it was fed back to the LLM for remediation). Frankly, this exercise was surprisingly novel, to the extent that we had to pilot what felt like three-quarters of the processes. It will take several more rounds before I personally feel we’ve fleshed it out with a sufficiently useful set of outcomes to evaluate.

But, in the meantime, red teaming is a way to give people hands-on experience so that they can see a place for themselves in the field and, we hope, an opportunity to shape their own futures. We believe (and have seen) that sharing these hands-on experiences that deepen people's knowledge drives energy and engagement. We also believe that more engagement in these communities will give people a voice to help drive policy that better reflects their concerns. And, as a next step, red teaming will serve as a security-focused on-ramp into a host of vetted educational and training tools.

We agree with the report authors' point about the importance of constructing red-teaming exercises in a way that results in actionable insight and includes a transparent disclosure and accountability process. We hope that our efforts to pilot the concept of public red teaming will show that, without a diverse group of users, even this more robust process will not serve users or model creators well. And worse, if it’s done without direct engagement with a broader swath of community members, it may shut out the very people who have the most to gain from the next technological revolution.

We must diversify AI security conversations as well as the field itself, and I seriously worry that elitism will manifest perniciously as “capability” or “talent” and lock people out.

That's why we are taking our learnings from DEF CON and expanding the red-teaming exercise, as well as adding more educational and learning components. We plan to take these events to communities across the country in the coming year, focusing on engaging people traditionally left out of early technological change so that they have a say this time.

Moving forward, we are working to define what red teaming is and how to differentiate between issues that reflect individual or societal values and genuine safety issues. The new educational components will also ensure that people see an opportunity to use AI in their current jobs or fields, beyond opportunities to join the AI economy directly. We plan to work with a range of partners to tailor these components and evaluate their usefulness as we move across the country, and to see what teams we can build across states and disciplines.

There will always be more work to do, but we are certain that AI policy will benefit immeasurably when all people - not just the tech or societal elite - have a voice.

Sincerely,

Austin Carson

Founder & President of SeedAI