Google has organised the Adversarial Nibbler Challenge for selected students of the University of Cape Coast (UCC).
The Adversarial Nibbler Challenge is aimed at crowdsourcing a diverse set of failure modes and rewarding challenge participants for successfully finding safety vulnerabilities.
Adversarial Nibbler is a data-centric AI competition that aims to construct a diverse set of insightful examples of long-tail problems for text-to-image models. In doing so, it helps identify blind spots in the detection of harmful image generation.
The competition engages the students in collecting a broad set of failure modes, with the goal of improving fairness in AI.
The Project Lead, Dr. Stephen Moore, said in an interview that with the recent advancement of generative AI, the role of data is even more crucial for successfully developing models that are factual and safe.
He added that the idea was to test recent AI developments for biases and for both explicit and implicit harmful implications.
“... specifically, Adversarial Nibbler focuses on data used for safety evaluation of generative text-to-image models,” he added.
Dr. Moore continued: “Our competition aims to gather a wide range of long-tail and unexpected failure modes for text-to-image models to identify as many new problems as possible and use various automated approaches to expand the dataset to be useful for training, fine-tuning, and evaluation.”
He encouraged the students to see this opportunity as a stepping stone to more achievements in the future.
The competition is expected to end in October 2024.
Source: Documentation and Information Section-UCC