Panel for the Future of Science and Technology
EPRS | European Parliamentary Research Service
Scientific Foresight Unit (STOA)

STUDY
Regulatory divergences in the draft AI act

This study identifies and examines sources of regulatory divergence within the AI act regarding the obligations and limitations upon public and private sector actors when using certain AI systems. A reflection upon possible impacts and consequences is provided, and a range of policy options is suggested for the European Parliament that could respond to the identified sources of divergence. The study is specifically focused on three application areas of AI: manipulative AI, social scoring and biometric AI systems. Questions regarding how and when those systems are designated as prohibited or high-risk, and the potentially diverging obligations towards public versus private sector actors and the rationale behind them, are described.

STOA | Panel for the Future of Science and Technology

AUTHORS
This study has been written by Ilina Georgieva, Tjerk Timan and Marissa Hoekstra of TNO at the request of the Panel for the Future of Science and Technology (STOA) and managed by the Scientific Foresight Unit, within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament.

ADMINISTRATOR RESPONSIBLE
Philip BOUCHER, Scientific Foresight Unit (STOA)
To contact the publisher, please e-mail stoa@ep.europa.eu

LINGUISTIC VERSION
Original: EN
Manuscript completed in March 2022.

DISCLAIMER AND COPYRIGHT
This document is prepared for, and addressed to, the Members and staff of the European Parliament as background material to assist them in their parliamentary work. The content of the document is the sole responsibility of its author(s) and any opinions expressed herein should not be taken to represent an official position of the Parliament. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.
Brussels © European Union, 2022.
PE 729.507
ISBN: 978-92-846-9459-4
doi: 10.2861/69586
QA-07-22-331-EN-N
http://www.europarl.europa.eu/stoa (STOA website)
http://www.eprs.ep.parl.union.eu (intranet)
http://www.europarl.europa.eu/thinktank (internet)
http://epthinktank.eu (blog)

Executive summary

This study identifies and examines sources of regulatory divergence within the AI act (AIA) regarding the obligations and limitations upon public and private sector actors when using certain AI systems. A reflection upon possible impacts and consequences is provided, and a range of policy options is suggested for the European Parliament that could respond to the identified sources of divergence. The study is specifically focused on three application areas of AI, being manipulative AI, social scoring and biometric AI systems. Questions regarding how and when those systems are designated as prohibited or high-risk, and the potentially diverging obligations towards public vs. private sector actors and the rationale behind it, are described. Through the use of existing examples of these three application areas, both by public and private actors, the potentially diverging obligations under the AIA are contextualised. These divergences in the AIA, combined with current examples of AI applications in the three areas of analysis (biometrics, manipulative systems and social scoring), lead to an analysis per section. These three analyses will form the basis for discussion of key findings of the report, out of which follow a number of policy options.

Risks of AI systems

While not all AI systems harbour potential harm for individuals, there are various examples of both public and private sector use of AI systems that have caused direct or indirect harms. The question is whether and how the AIA can mitigate risks caused by public or private sector AI, and how it overlaps or intertwines with other sources of EU law. This study finds that there is a convergence of risks between public and private sector AI use. As boundaries between service providers and users are blurring, and AI is increasingly part of a 'system of systems', conducting clear-cut risk assessments of AI systems will become increasingly challenging. In addition, the study documents tensions between general and sectoral regulatory approaches where appropriate. The AIA proposes procedural steps towards self-regulation of AI, much in line with the set-up of the General Data Protection Regulation (GDPR), while at the same time proposing substantial measures in the form of, for instance, a prohibited list of AI applications.
The governance of this list and the classification of high-risk applications or systems can lead to diverging interpretations and developments of AI systems. Moreover, the specific risk assessment proposed in the AIA can lead to diverging risk classifications in relation to, for instance, a risk assessment that needs to be performed on data as demanded by the GDPR for that same AI system. Regulatory coherence could be achieved by better aligning risk assessment efforts of digital (and AI-based) systems.

Prohibited practices and high-risk AI systems

The AIA exhibits a number of divergences in how it creates obligations for public and private actors. The study documents them and points to (challenges concerning) regulatory coherence with Union law where appropriate. We caution, however, that given the scope of this report we address only such divergences that directly or indirectly pertain to the broader discussion on public versus private obligations in relation to AI systems. Below, we summarise these diverging obligations as stipulated under the AIA's prohibited practices and high-risk systems.

We find that the tools for law enforcement to detect deep fakes are considered to be high risk, while deep fakes themselves fall in the low-risk category. This is a peculiar divergence that appears to be grounded in the assumption that deep fakes (employed mostly by private actors for the time being) harbour less risk than AI systems in the hands of a public actor for the purpose of detecting deep fakes.

Regarding the AIA's regulatory (in)coherence with EU law on how it addresses manipulative practices, we find the Unfair Commercial Practices Directive (UCPD) to be most relevant. Thus, the divergence here is the scope of the ban on the use of manipulative AI systems for public actors versus the prohibition scope for private actors under the UCPD. In contrast to Article 5(2) UCPD, which caters to the protection of vulnerable groups beyond those strictly enumerated in Article 5(3) UCPD, the prohibition of certain manipulative systems as described in Article 5(b) AIA focuses only on vulnerabilities due to age and physical or mental disability. By not providing an analogous alternative, the AIA portrays a significant gap in the protection of persons who might be subject to AI manipulation on the
basis of other protected characteristics under EU equality law, such as ethnicity, religion, race, sex, etc. Further, the AIA requires intent in order for the prohibition to be applicable, while its counterpart article in the UCPD (Article 5(3) UCPD) protects the defined vulnerable groups from commercial practices that are also unintentionally directed towards them. A last AIA regulatory incoherence with other EU law here is the narrow scope of the definition of harm in the AIA (physical or psychological harm), which cannot be found elsewhere in Union law. EU law usually speaks of harm in a generic way, without elaborating on the harm types that fall under it. The narrow scope of harm is as such divergent from general Union law.

Social scoring

The divergences in public versus private sector obligations in this type of AI application relate to the fact that 1) the ban on social scoring for public authorities does not extend to the private sector; and 2) from a data perspective, the grounds for prohibition for the public sector are more detailed (including categories on the basis of which data cannot be derived) than those on use for private actors – the latter are merely obliged to look at data quality and data governance. The latter point brings us to the AIA's regulatory incoherence in relation to the EU's data protection regime. Externally, the use of social scoring by private actors creates issues between Article 10 AIA, Articles 22 and 35 GDPR, and Articles 6 and 7 of the draft ePrivacy regulation. It is unclear how Article 10 AIA interacts and reconciles with data protection rights regarding consent and the right not to be subjected to automated decision-making and profiling, to name just a few.

The AIA singles out law enforcement activities in publicly accessible spaces that employ real-time biometric identification systems (BIS), leaving out the use of real-time BIS by private sector actors. The AIA's approach to the ban in Article 5(1)(d) differs from that in the GDPR, which does not distinguish between public and private data controllers. Further, the prohibition of Article 5(1)(d) AIA focuses on BIS in "publicly accessible spaces" and appears to be in direct contradiction to Recital (6) AIA, which, when clarifying the notion of AI systems, explicitly refers to their effects "[…] in a physical or digital dimension". Lastly, while the deployment of biometric categorisation systems (BCS) and emotion recognition systems (ERS) as low-risk AI systems entails mere transparency obligations, these do not apply to law enforcement. Regarding systems that can be just as intrusive as BIS (and are also prohibited for law enforcement), law enforcement here does not even have information or disclosure obligations.

Key findings

When evaluating our findings, we summarise the divergences and generalise their meaning for the AIA's purpose and normative outlook. Reflecting on the identified divergences, we notice that these relate to treating similar practices (uses of AI systems) differently depending on the actors that deploy them. We see the latter in the dichotomy of public versus private sector obligations in relation to social scoring, as well as in relation to the prohibition of real-time BIS for law enforcement. Our analysis shows, however, that the separation of private and public actors' AI practices is less and less defensible, as the risk levels associated with AI use by either public or private actors do not differ in the power asymmetry they create towards the individual. Further, the example of designating systems that law enforcement uses for the detection of deep fakes as high-risk, versus designating deep fakes in general as low risk, provides additional evidence for the shortcomings in the AIA's risk-based approach. We see similar types of systems placed in different risk categories without the rationale to do so being backed up by concrete risk-level assessment criteria. Certain scoping issues relate to the requirements of harm and intentionality and their relationship to the UCPD and other sources of Union law, as well as to the distinction of public versus private upheld by the AIA in contrast to the GDPR. The scoping issues portray an urgent need for more harmonisation of the AIA provisions with existing EU law. The procedural lack of coherence brings more obvious inconsistencies between the AIA on the one hand, and the GDPR and the ePrivacy regulation on the other. One key example would be that the proper procedures around training data for AI systems, including the obtaining of consent and from whom, as well as the legal basis for such processing, are unclear.

Policy options

Address the incoherence of risk assessment and introduce explicit risk criteria: The AIA is different from other recent EU legislative endeavours such as the GDPR in the sense that the normal risk-based approach to regulating a technology has been expanded from being a procedural obligation to also add a substantial part. The act proposes a classification of risks in relation to AI by dividing applications of AI into three categories: prohibited, high-risk and low-risk. However, as evidenced by the identified divergences, the AIA's risk categories are not always applied consistently when it comes to public or private actors and their obligations to mitigate such risks. When looking at risk assessment, a policy option would be to make very clear what the risk assessment is precisely about and to provide clear delineations or cut-off points on what part of the system needs assessment. In addition, providing guidelines on how the AIA risk assessment interacts with other risk assessment obligations put forward in many of the EU regulations and directives that deal with digitisation (e.g. the data protection impact assessment (DPIA) in the GDPR) would be a step towards harmonisation.

Consider strengthening information and disclosure obligations with withdrawal rights: Transparency in the AIA is not linked to a subjective right and remains as such at the level of principle or policy aspiration. Further, as a disclosure obligation it is not applicable to law enforcement, thereby limiting even further the chances for individual protection. An option to overcome this unsatisfactory state within the AIA would be 1) to clarify and directly stipulate in the AIA's provisions how GDPR rights and remedies are applicable to the addressees of AI systems, especially so when data rights are involved; and 2) to further critically assess the connection between the AIA's transparency obligations and redress mechanisms by strengthening information and disclosure obligations with withdrawal rights.

Consider non-linear modes of governing and co-regulation strategies: The AIA's current approach to governing is hierarchical (law-centric) combined with forms of self-regulation (techno-centric). Obligations that are not clarified in a top-down manner are left to the industry standardisation bodies to figure out, which severs the channels of communication. More importantly, this approach focuses largely
on the material features of AI systems, omitting to incorporate in a more consistent way the underlying or emerging socio-technical changes and their actual impacts on individuals and society. The AIA does not provide clear measures in place to monitor such regulatory effects, except the ex-ante risk assessment and perhaps 'by-design' approaches via regulatory sandboxes. Measuring long-term effects and socio-technical changes as a result of using AI systems by means of ex-post impact assessments is currently lacking. The AIA can be re-evaluated in this fashion to consider 1) the trajectory and distribution of AI systems, including the factors that drive their proliferation in sectors; 2) the political viability of the regulation and how certain features affect stakeholder perceptions; and 3) potential ways for regulatory leverage. This would allow legislators, regulators and policy-makers to consider issues arising from AI systems, and the activities and roles of private parties behind those, in a holistic manner.

Table of contents

1. Background 1
2. Research scope, concepts and methodology 3
2.1. Scope 3
2.2. Concepts and methodology 4
3. Risks of AI systems 5
3.1. Digitisation, risks and harms 5
3.2. How to regulate as a result of risk: Sector-specific rules for private actors and generic rules for the public sector? 7
4. The AI Act in context 8
4.1. The AI Act's objective 8
4.2. The regulatory package accompanying the EC's strategy 'Europe fit for the digital age' 9
5. Public-Private divergences in the AI Act 11
5.1. Identified divergences 11
5.2. Manipulative AI systems 12
5.2.1. AIA divergences 12
5.2.2. Examples of use of deep-fakes from the practice 14
5.2.3. Analysis 14
5.3. Social scoring 15
5.3.1. AIA divergences 15
5.3.2. Examples of use of social scoring from the practice 17
5.3.3. Analysis 18
5.4. Biometrics 19
5.4.1. AIA divergences 19
5.4.2. Examples of the use of biometric AI systems from the practice 21
5.4.3. Analysis 23
6. Key findings and discussion 25
6.1. AIA divergences - treating similar AI systems differently depending on the user 25
6.2. Regulatory (in)coherence with EU law 26
6.2.1. Scope 27
6.2.2. Procedure 27
7. Policy Options 29
7.1. Policy Option 1: Address the incoherence of risk assessment and introduce explicit risk criteria 29
7.2. Policy Option 2: Consider strengthening information and disclosure obligations with withdrawal rights 30
7.3. Policy Option 3: Consider co-regulation strategies and impact assessments 31
8. References 33

List of abbreviations

AIA  Artificial intelligence act
BCS  Biometric Categorisation Systems
BIS  Biometric Identification Systems
DA   Data Act
DMA  Digital markets act
DPIA Data Protection Impact Assessment
DSA  Digital services act
DGA  Data Governance Act
EC   European Commission
ERS  Emotion Recognition Systems
EU   European Union
GDPR General Data Protection Regulation
LED  Law Enforcement Directive
UCPD Unfair Commercial Practices Directive

1. Background

Artificial intelligence's (AI) uptake and use across all sectors of society is increasingly seen as the determinant of a great-power status in matters both political and economic.[1] The latest addition to the landscape of the digital age is subject to fierce international competition, all the while regulators puzzle on how to vest AI developments in a meaningful legal framework that protects citizens' rights, boosts digital sovereignty and creates enough legal certainty for all stakeholders involved in the AI chain. The latter is needed not only for those who develop and deploy AI systems and whose processes would benefit from streamlined rules of the road, but also for those who are the direct addressees of (unaccounted for) algorithmic harms.

The proposal for a Regulation concerning AI (the AI Act or AIA) presented by the European Commission (EC) on 21 April 2021[2] is one of the first attempts at horizontal AI regulation[3] that harmonises rules for the development, placement on the market and use of AI systems. Following in the footsteps of the General Data Protection Regulation (GDPR), by means of which the EU propagated norms on data regulation beyond its borders, the draft AIA is a similar attempt to gain a head start in AI governance. The latter is facilitated by its a-territorial[4] rationale – the Act would extend the EU's jurisdiction to all AI systems that produce outcomes within the EU, irrespective of whether the system's user or provider are located within the EU. Further, the AIA proposes different regulatory burdens depending on the AI system at hand – it bans certain types of systems, regulates, under the umbrella of a high-risk regime, those that pose a threat to fundamental rights and safety, and places voluntary constraints on less risky systems – thereby following a risk-based approach.

This report focuses on regulatory choices within the Act that create diverging obligations for public and private actors around prohibited and high-risk AI systems such as manipulative AI, social scoring and biometrics, and the rationale behind them. For instance, Art. 5(1)(c) AIA prohibits the use of social scoring systems by public authorities, without banning such systems deployed by private actors. The latter systems, however, oversee financial flows, insurance policies and claims, housing applications, etc., and as such control access to essential (state-like) services, which individuals cannot forego. Thus, while opting out is not a viable option, the scoring systems used by private providers of such services are subject to less strict standards than the ones for services of similar magnitude in the public sector.

Another example is found in the prohibition of Art. 5(1)(d) AIA, which covers biometric facial recognition by law enforcement, while leaving the practice open for other public authorities and the private sector. Private actors, especially, are increasingly making use of image recognition cameras in combination with biometric technology to provide access to shops, banks, etc.,[5] or to instantaneously assess a person's reaction to a product or situation, and to thereby attain a better position in inducing desired consumer behaviour. Next to the more obvious implications of indiscriminate (consumer) surveillance or discrimination against certain population groups towards which the algorithm might be less sensitive, the use of biometric identification systems in the last example structurally affects the way we engage with daily occurrences or give consent.

[1] European Parliament / Special Committee on Artificial Intelligence in a Digital Age, Draft report on artificial intelligence in a digital age (2020/2266(INI)), PR_INI (europa.eu), p. 8.
[2] European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final.
[3] Another noteworthy attempt is, for instance, the US National AI Initiative Act, which came into force on 1 January 2021.
[4] Floridi, L., 'The European Legislation on AI: a Brief Analysis of its Philosophical Approach', Philosophy & Technology (2021), pp. 1-8.
[5] See: AI Facial Recognition and IP Surveillance for Smart Retail, Banking, and the Enterprise, Interesting Engineering.

The report places in context these and other examples of diverging obligations under the AI Act by relating them to applications of AI systems and their harms in practice, as well as to other instances of Union law. By means of the comparison between the AIA and other EU regulatory instruments, as well as between the outcomes/harms of AI systems deployed by public vs. private actors, the study seeks to establish to what extent the AIA adequately addresses the responsibilities that come along with the deployment of societally-transformative technologies. In the same spirit, the report also questions traditional, linear conceptions of agency and role division, highlighting the AI industry's 'private ordering'[6] – the regulatory power digital technologies have over us – and its regulatory effects, and bringing them to the attention of legislators, regulators and policy-makers in ongoing discussions.

While we investigate regulatory divergence relating to prohibited and high-risk AI systems in the AIA, we do so through the conceptual lens of legislative coherence. As will be explained in detail below, we employ coherence to assess whether the AIA's design and provisions do justice to its intention and normative outlook, as well as to other principles of law on related topics found in existing and upcoming legal instruments related to or applicable to Europe's digital agenda, of which the AI Act is part. We thus not only examine the Act's suitability for effectively regulating biometrics, face recognition and/or social scoring AI systems, but also identify the implications of the established discrepancies in public vs. private ordering in the strive for legal certainty and harmonisation of digital regulations.

This report proceeds as follows. Chapter 2 describes the research scope, employed concepts and methodology. Further, since the AIA is (at least partially) a risk-based instrument and regulates AI systems according to risk levels, to better substantiate our argument chapter 3 elaborates on conceptions of risk and AI harms, among others related to the use of AI systems by public and private actors. To pave the way to an understanding of the AIA, chapter 4 starts with an overview of the AIA's stated objective, as well as of the objective of the regulatory package surrounding the EC's digital strategy. Chapter 5 continues by diving into the selected AI systems – manipulative AI systems, social scoring and biometric systems. We provide an overview of the systems' underlying technology, before elaborating on the provisions of the AIA that address such systems and their use in both the public and the private sector. Chapter 6 summarises the identified divergences before offering policy options in Chapter 7.

[6] Delacroix, S., 'Beware of ''Algorithmic Regulation''', SSRN, 2019.

2. Research scope, concepts and methodology

2.1. Scope

This project aims to identify and examine sources of regulatory divergence within the AIA regarding how public and private sector actors may use certain AI systems, to reflect upon their possible impacts and consequences, and to develop and assess a range of policy options for the European Parliament that could respond to the identified issues. The latter requires consulting a number of existing and p