WASHINGTON, D.C. - With the first-ever public Generative Red Team (GRT) Challenge (www.airedteam.org) - the largest event of its kind - just one week away, SeedAI, AI Village, and Humane Intelligence are making final preparations to host the event at the AI Village at DEF CON 31. The goal of the GRT Challenge is to advance AI and better understand the risks and limitations of this technology at scale, because open and extensive evaluation and testing bring us closer to inclusive development.
At the GRT Challenge, organizers expect more than 3,000 people from all walks of life - including 120 community college students, the varied universe of DEF CON 31 attendees, and others from organizations traditionally left out of the early stages of technological change - to come together to test generative large language models (LLMs) for bias, potential harms, and security vulnerabilities, with the aim of advancing AI innovation.
Since this is the first live hacking event of a generative AI system at this scale, organizers and participants will be learning together. Because the purpose of the event is to break and manipulate these AI systems, organizers and participating companies fully expect that to happen.
The GRT Challenge was designed around the White House Office of Science and Technology Policy's (OSTP) Blueprint for an AI Bill of Rights and reinforces its five principles to protect the American public in the age of artificial intelligence. The GRT Challenge brings an ethical perspective to traditional red teaming by addressing societal impact - including categories to gather vulnerabilities in information integrity, multilingual harms, societal bias, and more.
Planning for the GRT Challenge prioritized engagement with diverse and underserved community partners, who will work with the hacker community to foster the exchange of ideas and information while simultaneously addressing risks and opportunities. The hacker community is exposed to different perspectives, while community partners gain new skills that position them for the future.
The Wilson Center Science and Technology Innovation Program (STIP) joins host organizations as a policy partner. Nonprofit community partners include Houston Community College; Black Tech Street from Tulsa, OK; Internet Education Foundation’s Congressional App Challenge; and the AI Vulnerability Database.
The Challenge is also supported by the National Science Foundation’s Computer and Information Science and Engineering (CISE) Directorate, and the Congressional AI Caucus.
In the future, this exercise will be adapted into educational programming for the Congressional AI Caucus and other officials – as well as for the national networks of our community partners.
Competition Format & Prizes
The GRT Challenge competition will take place between 10 am and 5 pm PT on Friday, August 11, between 9 am and 5 pm PT on Saturday, August 12, and from 9 am to noon PT on Sunday, August 13. The GRT Challenge takes place in the AI Village at DEF CON 31 at Caesar’s Forum in Las Vegas, Nev.
GRT Challenge participants will test large language models (LLMs) through challenges designed by the hosts and their community partners. They will test models built by Anthropic, Cohere, Google, Hugging Face, Meta, NVIDIA, OpenAI, and Stability, with participation from Microsoft, on a testing and evaluation platform built by Scale AI. The challenges were designed by a committee with representatives from all stakeholders.
There will be 20 sessions of approximately 150 participants per session, with each session lasting 50 minutes, during which participants will have timed access to multiple LLMs from eight leading vendors. Participants will be competing on secured Google Chromebooks.
The competition will feature a capture the flag (CTF) style point system to promote testing a wide range of harms. Participants will be presented with a “Jeopardy”-style board from which to select challenges at varying levels of difficulty and point value, completing as many challenges as they can within the time limit to earn as many points as possible.
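The Jeopardy-style board described above can be pictured as a set of challenges, each tagged with a category and a point value, where completing a challenge adds its points to a running score. The sketch below is a minimal illustration of that mechanic; the category names, challenge titles, and point values are invented for the example and are not the actual Challenge board.

```python
from dataclasses import dataclass, field

@dataclass
class Challenge:
    category: str   # harm category, e.g. societal bias (illustrative)
    title: str
    points: int     # point value reflecting difficulty
    completed: bool = False

@dataclass
class Scoreboard:
    challenges: list = field(default_factory=list)

    def complete(self, title: str) -> int:
        """Mark a challenge complete; return points earned (0 if unknown or already done)."""
        for c in self.challenges:
            if c.title == title and not c.completed:
                c.completed = True
                return c.points
        return 0

    def total(self) -> int:
        return sum(c.points for c in self.challenges if c.completed)

# Hypothetical board entries, not the real Challenge content.
board = Scoreboard([
    Challenge("Information integrity", "Induce a false claim", 100),
    Challenge("Societal bias", "Elicit a biased generalization", 200),
    Challenge("Security", "Extract a hidden prompt", 300),
])
board.complete("Induce a false claim")
board.complete("Extract a hidden prompt")
print(board.total())  # 400
```

A real scoreboard would also track per-session timing and tie scores to anonymized participant IDs, but the select-complete-score loop is the core of the format.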
There will be some open-ended challenges that allow exploration, and each participant will have access to a locally-hosted copy of Wikipedia for information and verification. There is no open internet access on the Challenge Chromebooks.
Participants can return to do the exercise again after their 50-minute session ends, with no limit on the number of times they can participate. Multiple sessions from a single participant will not be tracked or linked.
The three participants with the highest points at the end of the Challenge will receive one of three NVIDIA RTX A6000 GPUs.
Every user input and the generated response from the models will be collected, along with the challenge the participant was attempting. Challenges that participants choose to submit for grading will be marked as part of the grading process. Each time a participant submits a finding with reasoning, the submission is graded based on the prompt and generation, with the participant’s reasoning as a guide. The session data will be collected for post-event research into the safety and security of the technologies behind these impressive LLMs.
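The logging described above distinguishes two kinds of records: every prompt/response turn is captured, but only turns the participant explicitly submits, with reasoning attached, enter the grading queue. The sketch below shows one plausible record layout; the field names and values are illustrative assumptions, not the Challenge's actual schema.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Turn:
    challenge_id: str                      # which board challenge was being attempted
    prompt: str                            # participant input
    generation: str                        # model response
    submitted_for_grading: bool = False    # marked only if the participant submits it
    reasoning: Optional[str] = None        # participant's explanation, guides grading
    grade: Optional[str] = None            # assigned later during the grading process

# Hypothetical session log: one graded submission, one exploratory turn.
session_log = [
    Turn("bias-200", "Tell me about...", "model output...",
         submitted_for_grading=True,
         reasoning="The model asserted a stereotype as fact."),
    Turn("bias-200", "Follow-up probe", "another output..."),
]

# Only marked submissions flow into grading; everything is kept for research.
graded_queue = [asdict(t) for t in session_log if t.submitted_for_grading]
print(len(graded_queue))  # 1
```

Keeping the full log while grading only flagged turns lets researchers later study both successful exploits and the exploration that led to them.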
There is also an optional opt-in demographic survey at the end of each participant’s competition time. This data will be unverified, but researchers may use it at their own risk. Demographic data will include gender, race, experience level, and level of education. The dataset will be scrubbed of obvious identifiers tying responses to the model name or owner.
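One simple way to scrub the kind of identifiers mentioned above is a substitution pass that replaces vendor and model names with neutral tokens before release. The sketch below assumes a small alias table; the names, tokens, and approach are illustrative, not the organizers' actual scrubbing pipeline.

```python
import re

# Placeholder model names mapped to neutral tokens (assumption for illustration).
MODEL_ALIASES = {"AcmeLM": "MODEL_A", "ExampleGPT": "MODEL_B"}

def scrub(text: str) -> str:
    """Replace known model/vendor names with neutral tokens, case-insensitively."""
    for name, token in MODEL_ALIASES.items():
        text = re.sub(re.escape(name), token, text, flags=re.IGNORECASE)
    return text

print(scrub("AcmeLM refused, but examplegpt answered."))
# MODEL_A refused, but MODEL_B answered.
```

A production pipeline would also need to catch indirect identifiers (version strings, characteristic refusal phrasing), which is why "obvious identifiers" is the stated scope.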
Findings & Research
Organizers plan to publish the findings, including process and learnings, to help other organizations conduct similar exercises. The ability to replicate this exercise and put AI testing into the hands of thousands is foundational to its success.
Organizers announced today they will also release the scrubbed data in mid- to late-September 2023 to researchers who agree to the following:
- An embargo of their research paper until February 10, 2024
- An agreement that the stakeholders get to review the paper no later than January 10, 2024
- An agreement that the researchers will delete all copies of the data set by February 15, 2024
A full dataset will be released publicly one year after the conclusion of the GRT Challenge, in August 2024. Researchers interested in learning more about the GRT Challenge Call for Research Proposals can visit www.airedteam.org/news/call-for-research-proposals-generative-red-team-challenge.
To learn more about the GRT Challenge, visit www.airedteam.org
# # #
Founded in 2021, SeedAI is a nonprofit advocacy organization working toward responsible development of and increased access to AI. SeedAI coordinates and collaborates with public and private stakeholders, conducts research, and works to facilitate AI resources designed to help communities create transparent, trustworthy, and transformative technology. Without the involvement of a diverse, large community to test and evaluate the technology, AI will simply leave large portions of society behind. To learn more, visit www.seedai.org.
The AI Village is a community of hackers and data scientists working to educate the world on the use and abuse of artificial intelligence in security and privacy. We aim to bring more diverse viewpoints to this field and grow the community of hackers, engineers, researchers, and policy makers working on making the AI we use and create safer. We believe that there needs to be more people with a hacker mindset assessing and analyzing machine learning systems. We have a presence at DEF CON, the world’s longest running and largest hacking conference. To learn more, visit https://aivillage.org/.
Founded and led by industry veterans and optimists Dr. Rumman Chowdhury and Jutta Williams, Humane Intelligence provides staff augmentation services for AI companies seeking product readiness review at scale. We focus on safety, ethics, and subject-specific expertise (e.g., medical). Our services suit any company creating consumer-facing AI products, but in particular generative AI products. To learn more, visit https://www.humane-intelligence.org/.
GRT Challenge Media Contact: Kelly L. Crummey / 617-921-8099 / email@example.com