Lessons from red teaming 100 generative AI products

Authored by: Microsoft AI Red Team

Authors: Blake Bullwinkel, Amanda Minnich, Shiven Chawla, Gary Lopez, Martin Pouliot, Whitney Maxwell, Joris de Gruyter, Katherine Pratt, Saphir Qi, Nina Chikanov, Roman Lutz, Raja Sekhar Rao Dheekonda, Bolor-Erdene Jagdagdorj, Eugenia Kim, Justin Song, Keegan Hines, Daniel Jones, Giorgio Severi, Richard Lundeen, Sam Vaughan, Victoria Westerhoff, Pete Bryan, Ram Shankar Siva Kumar, Yonatan Zunger, Chang Kawaguchi, Mark Russinovich

Table of contents

Abstract
Introduction
AI threat model ontology
Red teaming operations
Lesson 1: Understand what the system can do and where it is applied
Lesson 2: You don't have to compute gradients to break an AI system
Case study #1: Jailbreaking a vision language model to generate hazardous content
Lesson 3: AI red teaming is not safety benchmarking
Case study #2: Assessing how an LLM could be used to automate scams
Lesson 4: Automation can help cover more of the risk landscape
Lesson 5: The human element of AI red teaming is crucial
Case study #3: Evaluating how a chatbot responds to a user in distress
Case study #4: Probing a text-to-image generator for gender bias
Lesson 6: Responsible AI harms are pervasive but difficult to measure
Lesson 7: LLMs amplify existing security risks and introduce new ones
Case study #5: SSRF in a video-processing GenAI application
Lesson 8: The work of securing AI systems will never be complete
Conclusion

Abstract

In recent years, AI red teaming has emerged as a practice for probing the safety and security of generative AI systems. Due to the nascency of the field, there are many open questions about how red teaming operations should be conducted. Based on our experience red teaming over 100 generative AI products at Microsoft, we present our internal threat model ontology and eight main lessons we have learned:

1. Understand what the system can do and where it is applied
2. You don't have to compute gradients to break an AI system
3. AI red teaming is not safety benchmarking
4. Automation can help cover more of the risk landscape
5. The human element of AI red teaming is crucial
6. Responsible AI harms are pervasive but difficult to measure
7. Large language models (LLMs) amplify existing security risks and introduce new ones
8. The work of securing AI systems will never be complete

By sharing these insights alongside case studies from our operations, we offer practical recommendations aimed at aligning red teaming efforts with real-world risks. We also highlight aspects of AI red teaming that we believe are often misunderstood and discuss open questions for the field to consider.

Introduction

As generative AI (GenAI) systems are adopted across an increasing number of domains, AI red teaming has emerged as a central practice for assessing the safety and security of these technologies. At its core, AI red teaming strives to push beyond model-level safety benchmarks by emulating real-world attacks against end-to-end systems. However, there are many open questions about how red teaming operations should be conducted, and a healthy dose of skepticism exists about the efficacy of current AI red teaming efforts [4, 8, 32].

In this paper, we speak to some of these concerns by providing insight into our experience red teaming over 100 GenAI products at Microsoft. The paper is organized as follows: First, we present the threat model ontology that we use to guide our operations. Second, we share eight main lessons we have learned and make practical recommendations for AI red teams, along with case studies from our operations. In particular, these case studies highlight how our ontology is used to model a broad range of safety and security risks. Finally, we close with a discussion...