March 2024
Expert Insights
PERSPECTIVE ON A TIMELY POLICY ISSUE
BILVA CHANDRA

Analyzing Harms from AI-Generated Images and Safeguarding Online Authenticity

The democratization of image-generating artificial intelligence (AI) tools without regulatory guardrails has amplified preexisting harms on the internet. The emergence of AI images on the internet began with generative adversarial networks (GANs), which are neural networks containing (1) a generator algorithm that creates an image and (2) a discriminator algorithm to assess the image's quality and/or accuracy. Through several collaborative rounds between the generator and discriminator, a final AI image is generated (Alqahtani, Kavakli-Thorne, and Kumar, 2021). ThisPersonDoesNotExist, a site created by an Uber engineer that generates GAN images of realistic people, launched in February 2019 to awestruck audiences (Paez, 2019), with serious implications for exploitation in such areas of abuse as widespread scams and social engineering. This was just the beginning for AI-generated images and their exploitation on the internet.

Over time, AI image generation advanced away from GANs and toward diffusion models, which produce higher-quality images and more image variety than GANs. Diffusion models work by adding Gaussian noise to original training data images through a forward diffusion process and then, through a reverse process, slowly removing the noise and resynthesizing the image to reveal a new, clean generated image (Ho, Jain, and Abbeel, 2020). Diffusion models are paired with neural network techniques to map text-to-image capabilities, known as text-image encoders (Contrastive Language-Image Pre-training [CLIP] was a milestone in this space), to allow models to process visual concepts (Kim, Kwon, and Chul Ye, 2022). Thus, the commercialization of diffusion models (DALL-E, Stable Diffusion, Midjourney, Imagen, and others) put the power of synthetic image generation in the hands of the user on a global scale. The rise of image generation tools has introduced synthetic forms of such safety harms as mis- and disinformation, extremism, and nonconsensual intimate imagery (NCII), causing further disarray and damage in the internet ecosystem.

The societal harms from AI image generation tools have yet to be effectively addressed from a regulatory standpoint because of a nexus of policy challenges, such as copyright protection, data privacy, ethics, and contractual requirements. Recent fears rising from generative AI have somewhat moved the policy needle on AI regulation, sparking great interest in Congress and the executive branch (as evidenced by the Senate AI insight forums and President Joe Biden's executive order on AI, respectively [White House, 2023b]). However, without legislation addressing safety and societal issues related to generative AI, executing a coherent regulatory strategy to address the harmful effects of AI-generated images on the internet is a tall order. (As of this writing, Section 230 of the Communications Decency Act of 1996 serves as the sole piece of legislation for internet regulation [U.S. Code, Title 47, Section 230].)

In this paper, I delve into safety harms and challenges from AI-generated images and how such images affect authenticity on the internet. The first section outlines the role of image authenticity on the internet. In the second section, I review the technical safety challenges and harms for the image generation space, then look at industry solutions to authenticity, including the promise of provenance solutions and issues with implementing them. The third section outlines several policy considerations to tackle this new paradigm that largely focus on provenance, given its promise as an authenticity solution and relevance in policy conversations. In this paper, content authenticity in the context of images refers to establishing transparent information about images (both human- and AI-generated), whether in origin, context, authorship, or other areas, in a way that is accessible to users on the internet. Throughout, this paper focuses on content authenticity, as it could play a key role in shaping public trust in image content broadly and covers a wide swath of issues, from disinformation to synthetic NCII.

Abbreviations
AI: artificial intelligence
C2PA: Coalition for Content Provenance and Authenticity
CAI: Content Authenticity Initiative
CID: civil investigative demand
CLIP: Contrastive Language-Image Pre-training
CSAM: child sexual abuse material
FTC: Federal Trade Commission
GAN: generative adversarial network
GIFCT: Global Internet Forum to Counter Terrorism
NCII: nonconsensual intimate imagery
NIST: National Institute of Standards and Technology
OECD: Organisation for Economic Co-operation and Development

The Role of Image Authenticity on the Internet

Images have played an important role in the history of the internet, informing people about current events, sparking emotional responses to war atrocities and injustice, galvanizing individuals to support a cause, and much more. Research shows that people tend to respond more viscerally to images than they do to text online (Medill School of Journalism, 2019). Studies suggest that the brain processes visual stimuli more rapidly than it does words (Alpuim and Ehrenberg, 2023). Furthermore, images can increase a viewer's perception of the truthfulness of an accompanying statement (Alpuim and Ehrenberg, 2023). However, the era of treating images as "proof" from a social standpoint is rapidly changing.

Image authenticity on the internet is in jeopardy, as AI-generated images without proof of provenance, or the origin of a given image, are affecting how people perceive current events and public figures. The issue is not only a decrease in content authenticity but also a lack of knowledge and tools among many users to help navigate this paradigm shift in the information domain. Opportunistic actors are taking advantage of accessible AI tools to reduce trust in content and media, particularly during tumultuous periods, such as violent conflicts. An example of this is the inflammatory AI-generated images spawned after Hamas's October 7, 2023, attack in Israel and the subsequent conflict in Gaza. Multiple AI-generated images (some photorealistic) spread widely on the internet, conjuring fake crowds of Israelis marching in support of the Israeli government or unusual images of Gazan children in the midst of explosions; these images were rarely labeled as AI-generated (Klee and Ramirez, 2023).

Solutions to this problem that focus on content authenticity could be the most valuable by providing individuals with more transparency about content that they consume and, therefore, more agency in terms of how to interpret that content. For example, solutions that provide metadata information to the user, such as authorship, geolocation data, and tools used in the editing of an image, can all be useful for user interpretation.
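The kind of metadata disclosure described above can be made concrete with a small sketch. All field names and values below are hypothetical illustrations, not drawn from any real labeling standard:

```python
# A hypothetical metadata record of the kind an authenticity solution
# might attach to an image; the fields are illustrative, not any
# real specification's schema.
image_metadata = {
    "author": "Jane Doe",
    "captured_at": "2024-01-15T09:30:00Z",
    "location": "40.7128N, 74.0060W",
    "generator": None,  # set to a tool name for AI-generated output
    "edits": ["crop", "color-balance"],
}

def render_label(meta: dict) -> str:
    """Summarize the metadata as a short, user-facing label."""
    origin = meta["generator"] or "camera capture"
    edits = ", ".join(meta["edits"]) or "none"
    return f"Origin: {origin} | Author: {meta['author']} | Edits: {edits}"

print(render_label(image_metadata))
# → Origin: camera capture | Author: Jane Doe | Edits: crop, color-balance
```

Surfacing even a minimal record like this next to an image gives a user something to interpret beyond the pixels themselves.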
Though the research behind authenticity measures affecting user trust in content is not conclusive, and content authenticity is not a silver bullet for solving all AI-driven harms on the internet, content authenticity solutions are a step in the right direction.

Image-generation technology will continue to evolve, become more advanced, and likely become more photoreal, increasing the escalation potential for harm. There is no foolproof way to make these models entirely safe for use. To start solving issues of AI-generated images fueling disinformation, harmful propaganda, NCII, and other safety harms, the United States must first develop solutions in regard to authenticity and improve the public's access to information about these images. The first step in navigating this shift is to comprehend how AI image generation tools produce safety issues and challenges and how current safeguards are insufficient to tackle the problem.

Safety Challenges in Image Generation

Safety challenges in the AI image generation landscape begin at the technical level, and the most significant safety challenges are due to biases and harms from training data, the existence of open-source image models, and a piecemeal approach to content moderation at the user level. The current AI image generation space mainly consists of text-to-image diffusion models, such as Midjourney, DALL-E, and Stable Diffusion, that generate images based on user prompts. Understanding the fundamentals of safety issues in text-to-image diffusion models can show why these models can produce unsafe images. Furthermore, diving deep into training data, open-source models, and content moderation reveals that these technical mitigations are simply not enough to prevent the generation of harmful content, and the United States needs authenticity solutions to manage risks and harms.

Image generation models reflect societal and representational biases on the internet because they are trained on data scraped from the internet. For example, there are far more images sexualizing women on the internet compared with similar images of men and far more images of men in professional roles (doctor, lawyer, engineer, etc.) than similar images of women. Image models conceptualize these representational biases and have become quite adept at generating content that both oversexualizes women and highlights men in professional positions (Heikkilä, 2022). Recent research shows that there is still much work to be done to make image models safer and less biased, as there are still severe occupational biases in models, which result in the exclusion of groups of people from generated results (Naik and Nushi, 2023). Furthermore, Rest of World in 2023 conducted an experiment with Midjourney that showed that the tool typically represented nationalities using harmful stereotypes: Images of a "Mexican person" mainly showed a man in a sombrero, and images of New Delhi almost exclusively showed polluted streets (Turk, 2023). The internet is inherently biased, given the nature of its human inputs: When you scrape highly biased data, you will generate it as well.

Safety harms in these models also stem from training data. To start, data labeling is largely outsourced to providers that specialize in scaled labeling, which is cost-effective. However, this process can introduce biases and inaccuracies in human labels for training data (Smith and Rustagi, 2020). When developers scrape data from the internet for image generation, their main method to ensure that models are safe is to filter training data and attempt to reduce the prevalence of harmful content in the training process (Smith and Rustagi, 2020). However, this method is contingent on how effective these filters are in rooting out harmful content while ensuring that they are not too conservative in excluding training data so that the models are still trained on a wide range of data and retain quality and creativity in their generations. Furthermore, even with robust safety filtering, the ability for models to deduce concepts across different types of benign images can result in the generation of a harmful image. For example, an image model that is trained on images of beaches but not on pornography will understand how the human body looks with swimsuits or minimal clothing. The same model could also be trained on images of children going to school, playing outside, etc.—all of which are benign images. These model capabilities combined (without further safety measures) will likely allow the model to create images of scantily clad or even nude children through malign prompt engineering techniques. However, most developers would still want training data to have images of beaches and children to ensure high-quality and effective generations. Unfortunately, the trade-off between safety and what constitutes a quality product is often difficult.

The discussion of safety challenges with image generation would be incomplete without highlighting the open-source space for this critical technology. The debates around open-source generative AI are abundant: Those who are in favor highlight the benefits of open access in improving models and their safety capabilities with wider researcher access, and those against are focused on the potential for malign use and challenges with monitoring and controlling model use. With image generation in particular, open access has benefited malicious actors in removing safeguards and generating harmful content at scale with the use of fine-tuned open-source models. For example, when Stable Diffusion was open-sourced in 2022, Unstable Diffusion was born. Unstable Diffusion was a large server that used Stable Diffusion's open-source model with reduced safeguards to create not-suitable-for-workplace content (Wiggers and Silberling, 2022). An even more alarming example is CivitAI, a site created for AI image generation that allows users to browse thousands of models in order to generate pornographic content and synthetic child sexual abuse imagery, streamlining the nonconsensual AI porn economy (Maiberg, 2023a). A more recent example is a Stanford Internet Observatory investigation that found hundreds of images of confirmed child sexual abuse material (CSAM) in an open dataset called LAION-5B, which is commonly used in image generation models, such as Stable Diffusion (Thiel, 2023). The use of open-source image generation poses ethical concerns related to child exploitation, consent, and fair use. From a consent and fair use perspective, such use could disproportionately affect sex workers, who may have their images scraped from the internet during the model training process, allowing their likeness to be reproduced synthetically without their consent and without compensation. Despite the benefits of open-source software ensuring greater access to image models, it has accelerated safety harms, especially those related to sexual content and consent.

Last, it is valuable to note the content moderation safeguards of classifiers and keyword-blocking, which are used in many image generation tools, and to detail why they are insufficient for safety. Safety, as a first principle across the AI development life cycle, is the path to ensuring safe image generation, starting from filtering data prior to training. When the majority of safety work is done in a monitoring capacity through content moderation, harmful generations are likely to fall through the cracks. The current image generation models on the market cannot entirely prevent the generation of NCII content, synthetic CSAM, extremist propaganda, and misinformation. Some of them, such as Stable Diffusion, rely on a weak form of content moderation: blocking specific keywords from being used. Stable Diffusion enacted keyword-blocking for words related to the human reproductive system to attempt to prevent the generation of pornography (Heikkilä, 2023). Bing's AI Image Generator, powered by DALL-E 3, blocked the keywords "twin towers" to prevent any harmful depictions of 9/11 (David, 2023). Despite keyword-driven safeguards, bot accounts were able to spread an AI-generated image of the Pentagon on fire, which briefly went viral (Bond, 2023). Keyword-driven moderation is at best a piecemeal solution that only helps to moderate low-hanging fruit and can hardly cover the multitude of harmful narratives at scale. Another form of moderation can be done through classifiers, such as prompt input classifiers that identify violative prompts and image output classifiers that classify images that should be blocked, both of which are used in DALL-E 3 to prevent harmful generations (David, 2023). Although such moderation is more holistic than just keyword-blocking, these classifiers are not foolproof. Specifically, red-teamers for DALL-E 3 found that (1) the model refusals on disinformation could be bypassed by asking for style changes, (2) the model could produce realistic images of fictitious events, and (3) public figures could be generated by the model through prompt engineering and circumvention (OpenAI, 2023). Broadly, content moderation, through both classifiers and keyword-blocking, was never built to infer context, and an attempt to do so is practically an impossible task. For example, users can attempt to bypass many keyword and classifier safeguards by using visual synonyms, such as "flesh-colored eggplant" (for a phallic image), "red liquid" for blood, and "skin-colored sheer top" to generate nudity; the list becomes endless.

These harms and safety challenges are shaping content authenticity on the internet. Though much of the spotlight of content authenticity is on mis- and disinformation, the existence of synthetic NCII, synthetic extremist propaganda, and more also directly shapes users' perceptions of reality and can cause great harm to individuals and societies. Unfortunately, from a technical perspective, eradicating all these safety harms seems unlikely, given the challenges of content moderation, bias and safety issues with training data, and the existence of tailor-made open-source models. Instead, the United States should look to create greater transparency around these images through solutions that promote access to information about the origin of content. These solutions may be less fruitful to mitigate synthetic NCII content, given that the core of mitigating NCII is not just deciphering whether the content is AI-generated but also ensuring legal recourse and accountability for individuals and entities that distribute and create such content. However, authenticity solutions such as provenance and watermarking could help dissuade and disincentivize malicious actors from using tools that adopt these safeguards and help debunk photorealistic NCII, disinformation, and much more. Framing this issue as one of content authenticity could empower users and put the onus of responsibility in terms of securing transparency and accountability for safety issues in image generation models on technology providers and government entities that can enforce regulations to combat these harms.

Industry Solutions to Preserve Image Authenticity

The issue of preserving authenticity on the internet is not new. Fake accounts, bots, phishing emails, and more have been persistent issues for years. Challenges with deepfakes started to spark more-deliberate conversations about photo and video authenticity; in 2017, a Reddit user exploited Google's open-source deep-learning library to post pornographic face-swap images (Adee, 2020). Now, the fight to preserve authenticity has become even more crucial, given a lack of policy safeguards and sufficient platform-level enforcement, as well as the speed at which AI image generation is improving. When examining the issue of harms caused by AI-generated images through the lens of "authenticity," it is important to think through what kinds of tools and technologies will best support individuals in determining whether an image is authentic, despite adversarial motives. A user-centric approach to this issue is key, given that the success of content authenticity initiatives can be shaped by user experiences, personal bias, and/or beliefs about technology.
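The brittleness of the keyword-blocking discussed in the previous section is easy to illustrate. The toy filter below uses an invented blocklist, not any product's actual safeguard; the visual-synonym prompt the text describes passes straight through:

```python
# Toy keyword filter in the style described above; the blocklist is an
# illustrative stand-in, not any real tool's actual list.
BLOCKLIST = {"blood", "nude", "nudity"}

def prompt_allowed(prompt: str) -> bool:
    """Reject a prompt if any blocklisted keyword appears in it."""
    words = prompt.lower().split()
    return not any(word in BLOCKLIST for word in words)

print(prompt_allowed("a pool of blood on the floor"))       # False: "blood" is caught
print(prompt_allowed("a pool of red liquid on the floor"))  # True: the visual synonym sails through
```

A classifier can catch some of these substitutions, but, as the red-teaming results cited above show, inferring context at scale remains the hard part.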
Watermarking, Hashing, and Detection

There have been several debates about the right approach to authenticity. The concept of watermarking has dominated the conversation and is a term that the White House included in its voluntary AI commitments for industry (White House, 2023a). However, there is confusion and a lack of consensus in the field about the definition of a watermark and how effective watermarks can be. A helpful way to define watermarking is provided by the Partnership on AI: a form of disclosure that can be visible or invisible to the user and includes "modifications into a piece of content that can help support interpretations of how the content was generated and/or edited" (Partnership on AI, 2023). Watermarking (both visible and invisible/metadata-based) can be a useful disclosure for the general public for content interpretation, but it is far from a holistic solution, given the need for watermarks to be robust against adversarial attacks, to overcome challenges to secure widespread adoption, and to be understandable by a consumer or user across different social platforms and hardware devices.

Another approach in the field is hashing, or fingerprinting image content, which happens after an image is created. Cryptographic hashing is used to determine exact matches, whereas perceptual hashing can find similar matches that may not be exactly the same image (Ofcom, 2022). The merits of hashing have been particularly evident in the identification of CSAM and terrorist content. For example, the National Center for Missing and Exploited Children and the Global Internet Forum to Counter Terrorism (GIFCT) are nonprofit organizations that use hash-sharing platforms with technology companies of thoroughly vetted hashes of confirmed CSAM (National Center for Missing and Exploited Children, undated) and terrorist content (GIFCT, undated), respectively. Hashing can prove useful in terms of sharing content among social media platforms that very clearly belongs in specific categories of abuse, but it is less practical for use across a variety of content, given that hashing occurs retroactively and cannot be done well at scale. It is also susceptible to adversarial attacks and is vulnerable to database integrity issues and discrepancies caused by human review in the content attribution process (Ofcom, 2022).

Last, an approach that has been discussed for several years is detection. Both established companies and smaller startups—such as Intel (Clayton, 2023), Optic (Kovtun, 2023), and Reality Defender (Wiggers, 2023)—have produced deepfake detection solutions. Though the technology can be promising, it comes with a host of issues. Traditionally, in the cybersecurity space, detection and evasion are a cat-and-mouse game, with detection needing to constantly improve as both adversarial actors and the technology itself improve. Reality Defender's chief executive officer claims that provenance and watermarking solutions are weaker, given that they require buy-in, and that Reality Defender's product, which is focused on inference (determining the probability of something being fake), is a more robust solution (Goode, 2023). However, even with high rates of efficacy, the onus would still be on users to gauge how much they should trust a piece of content based on a probability metric alone. Furthermore, current image detection capabilities have accuracy issues, as reported in a Bellingcat investigation. Bellingcat assessed a tool by Optic (called "AI or Not") and determined that it was successful in identifying AI-generated and real images quite accurately, except when AI-generated images were both compressed and photorealistic, in which case accuracy dropped significantly (Kovtun, 2023). Image compression (a relatively common practice) can assist malign actors in evading detection, especially on social media platforms, which generally compress all uploaded images. Detection tools are not foolproof and are fragile to minor perturbations.

Provenance

Though no individual solution to content authenticity is holistic, provenance is emerging as a useful tool to proactively preserve origin metadata and/or any editing or changes to a given piece of content. Detection methods risk being less useful as technology continues to improve and evolve while placing the onus of using detection tools on a user every time they come across content that they deem to be suspicious. Furthermore, such methods can lack accuracy, further obfuscating the decisionmaking process for an individual to assess content for its authenticity. Provenance approaches, such as establishing the origin of a piece of content through secured metadata, are generally more robust, given that they focus on the origin of content rather than proving whether something is real or fake. Furthermore, when implemented well, provenance solutions can be incorporated across the content supply chain—in AI image generation tools, social media platforms, news sites, and more—so that metadata information is readily available to a user and can complement and be used in tandem with watermarking and fingerprinting initiatives.

Examining the original approaches to […]eration (Zhang, Chapman, and LeFevre, 2009). In an AI image generation context, provenance could map the origin of an image through a cryptographic hash or signature that is applied and attached to the content, is stored securely through encryption, and is "tamper-evident" or is able to show whether the image has been altered in any way. Metadata information, available to users through labels, could also help define the trustworthiness of an image. In practice, making provenance a success goes far beyond a cryptographic signature and requires the widespread adoption of one or many interoperable frameworks across different mediums: hardware (e.g., cameras, smartphones), editing software (e.g., photo editing programs, face-swap apps), and publishing and sharing entities (e.g., news media, social media platforms), which is a very challenging task in practice.

The industry ecosystem has rallied around the application of provenance to support content authenticity ecosystems for AI-generated images. Industry leaders include Adobe, Intel, Microsoft, and Truepic, all of which are members of the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA). Both groups are focused on cross-industry participation to tackle the issues of media transparency and content provenance, with the C2PA framework underlying many of these initiatives, and have released products that use the C2PA framework. For example, Adobe launched its "Content Credentials" feature in 2023, which uses the C2PA standard to allow the attachment of secure, tamper-evident metadata on an export or download (Quach, 2023). The C2PA framework is an interoperable specification that "enables the authors of provenance data to securely bind statements of provenance data to instances of content using
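The "tamper-evident" binding that such provenance schemes rely on can be sketched in a few lines. This is a toy illustration that binds a metadata statement to a content hash with a keyed HMAC; real frameworks such as C2PA use public-key signatures and a standardized manifest format rather than a shared demo key:

```python
import hashlib
import hmac

# Stand-in for a signer's private key; a real scheme would use
# asymmetric signatures so anyone can verify without the key.
SIGNING_KEY = b"demo-key"

def sign_provenance(content: bytes, statement: str) -> str:
    """Sign the content hash together with its provenance statement."""
    digest = hashlib.sha256(content).hexdigest()
    message = f"{digest}|{statement}".encode()
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def verify_provenance(content: bytes, statement: str, signature: str) -> bool:
    """Re-derive the signature; any change to content or statement fails."""
    expected = sign_provenance(content, statement)
    return hmac.compare_digest(expected, signature)

image = b"\x89PNG...original pixel data"
sig = sign_provenance(image, "captured 2024-01-15 by camera model X")
print(verify_provenance(image, "captured 2024-01-15 by camera model X", sig))            # True
print(verify_provenance(image + b"edit", "captured 2024-01-15 by camera model X", sig))  # False: tampered
```

Any change to the image bytes or the statement invalidates the signature, which is the property the "tamper-evident" description above refers to.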