POLICY AND SOCIETY
2021, VOL. 40, NO. 2, 137–157
https://doi.org/10.1080/14494035.2021.1928377
Governance of artificial intelligence

Araz Taeihagh

Policy Systems Group, Lee Kuan Yew School of Public Policy, National University of Singapore, Singapore
ABSTRACT
The rapid developments in Artificial Intelligence (AI) and the intensification in the adoption of AI in domains such as autonomous vehicles, lethal weapon systems, robotics and the like pose serious challenges to governments as they must manage the scale and speed of socio-technical transitions occurring. While there is considerable literature emerging on various aspects of AI, governance of AI is a significantly underdeveloped area. The new applications of AI offer opportunities for increasing economic efficiency and quality of life, but they also generate unexpected and unintended consequences and pose new forms of risks that need to be addressed. To enhance the benefits from AI while minimising the adverse risks, governments worldwide need to understand better the scope and depth of the risks posed and develop regulatory and governance processes and structures to address these challenges. This introductory article unpacks AI and describes why the governance of AI should be gaining far more attention, given the myriad of challenges it presents. It then summarises the special issue articles and highlights their key contributions. This special issue introduces the multifaceted challenges of governance of AI, including emerging governance approaches to AI, policy capacity building, exploring legal and regulatory challenges of AI and Robotics, and outstanding issues and gaps that need attention. The special issue showcases the state-of-the-art in the governance of AI, aiming to enable researchers and practitioners to appreciate the challenges and complexities of AI governance and highlight future avenues for exploration.
KEYWORDS
Governance; artificial intelligence; AI; robotics; public policy
1. Introduction
Artificial intelligence (AI) is rapidly changing how transactions and social interactions are organised in society today. AI systems and the algorithms supporting their operations play an increasingly important role in making value-laden decisions for society, ranging from clinical decision support systems that make medical diagnoses, to policing systems that predict the likelihood of criminal activities, to filtering algorithms that categorise and provide personalised content for users (Helbing, 2019; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). The ability to mimic or rival human intelligence in complex problem-solving sets AI apart from other technologies, as many cognitive tasks
traditionally performed by humans can be replaced and outperformed by machines (Bathaee, 2018; Osoba & Welser, 2017; S?tra, 2020).
While the technology can yield positive impacts for humanity, AI applications can also generate unexpected and unintended consequences and pose new forms of risks that need to be effectively managed by governments. As AI systems learn from data in addition to programmed rules, unanticipated situations that the system has not been trained to handle and uncertainties in human-machine interactions can lead AI systems to display unexpected behaviours that pose safety hazards for their users (He et al., 2019; Helbing, 2019; Knudson & Tumer, 2011; Lim & Taeihagh, 2019). In many AI systems, biases in the data and algorithm have been shown to yield discriminatory and unethical outcomes for different individuals in various domains, such as credit scoring and criminal sentencing (Huq, 2019; Kleinberg, Ludwig, Mullainathan, & Sunstein, 2018). The autonomous nature of AI systems presents issues around the potential loss of human autonomy and control over decision-making, which can yield ethically questionable outcomes in multiple applications such as caregiving and military combat (Firlej & Taeihagh, 2021; Leenes et al., 2017; Solovyeva & Hynek, 2018). Responsibility and liability for harms resulting from the use of AI applications remain ambiguous under many legal frameworks (Leenes et al., 2017; Xu & Borson, 2018), and the automation of routine and manual tasks in domains such as data analysis, service, manufacturing and driving enabled by machine-learning algorithms, chatbots and driverless vehicles is expected to displace millions of jobs that will not be evenly distributed within and across countries (Linkov, Trump, Poinsatte-Jones, & Florin, 2018; Taeihagh & Lim, 2019). Managing the scale and speed of AI adoption and their attendant risks is becoming an increasingly central task for governments. However, in many instances, the beneficiaries of these technologies do not bear the costs of their risks, and these risks are transferred to society or governments (Leenes et al., 2017; Soteropoulos, Berger, & Ciari, 2018).
While there is considerable literature emerging on various aspects of AI, governance of AI is an emerging but significantly underdeveloped area. To enhance the benefits of AI while minimising the adverse risks they pose, governments worldwide need to understand better the scope and depth of the risks posed. There is a need to reassess the efficacy of traditional governance approaches such as the use of regulations, taxes, and subsidies, which may be insufficient due to the lack of information and constant changes (Guihot, Matthew, & Suzor, 2017), and the speed and scale of adoption of AI threaten to outpace the regulatory responses to address the concerns raised (Taeihagh, Ramesh, & Howlett, 2021). As such, governments face mounting pressures to design and establish new regulatory and governance structures to deal with these challenges effectively. The increasing recognition of AI governance across government, the public (Chen, Kuo, & Lee, 2020; Zhang & Dafoe, 2019, 2020) and industry is evident from the emergence of new governance frameworks in the meta-discourse on AI, such as adaptive and hybrid governance (Leiser & Murray, 2016; Linkov et al., 2018; Tan & Taeihagh, 2021b), and self-regulatory initiatives such as standards and voluntary codes of conduct to guide AI design (Guihot et al., 2017; IEEE, 2019). The first half of 2018 saw the release of new AI strategies from over a dozen countries, significant boosts in pledged financial support by governments for AI, and the heightened involvement of industry bodies in AI regulatory development (Cath, 2018), raising further questions regarding what ideas and interests
should shape AI governance to ensure inclusion and diverse representation of all members of society (Hemphill, 2016; Jobin, Ienca, & Vayena, 2019).

This special issue introduces the multifaceted challenges of governance of Artificial Intelligence, including emerging governance approaches to AI, policy capacity building, and exploring legal and regulatory challenges of AI and Robotics. This introduction unpacks AI and describes why the governance of AI should be gaining far more attention given the myriad of challenges it presents. The introduction then summarises the special issue articles and highlights their key contributions. Thanks to the diverse set of articles comprising this special issue, it highlights the state-of-the-art in the governance of AI and discusses the outstanding issues and gaps that need attention, aiming to enable researchers and practitioners to better appreciate the challenges that AI brings, understand the complexities of the governance of AI, and identify future avenues for exploration.
2. AI – background and recent trends
Conceptions of AI date back to earlier efforts in developing artificial neural networks to replicate human intelligence, which can be referred to as the ability to interpret and learn from information. Originally designed to understand neuron activity in the human brain, more sophisticated neural networks were developed in the late 20th century with the aid of advancements in processing power to solve problems such as image and speech recognition (Izenman, 2008). These efforts led to the introduction of the concept of AI as computer programs (or machines) that can perform predefined tasks at much higher speeds and accuracy. In the most recent wave of AI developments facilitated by advancements in big data analytics, AI capabilities have expanded to include computer programs that can learn from vast amounts of data and make decisions without human guidance, commonly referred to as machine-learning (ML) algorithms (Izenman, 2008). Unlike earlier algorithms that rely on pre-programmed rules to execute repetitive tasks, ML algorithms are designed with rules about how to learn from data, involving ‘inferential reasoning’, ‘perception’, ‘classification’, and ‘optimisation’ to replicate human decision-making (Bathaee, 2018; Linkov et al., 2018). The learning process involves feeding these algorithms with large datasets, from which they seek and test complex mathematical correlations between candidate variables to maximise predictions of a specified outcome (Kleinberg et al., 2018; Brauneis & Goodman, 2018). As these algorithms adapt their decision-making rules with more experience, ML-driven decisions are primarily dependent on the data rather than on pre-programmed rules and, thus, typically cannot be predicted well in advance (Mittelstadt et al., 2016).
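To make the contrast between pre-programmed rules and learned decision rules concrete, the brief sketch below compares a hand-coded rule with a model that infers its rule from labelled examples. It is a minimal illustration under stated assumptions: the toy screening data, the threshold in the hand-coded rule, and the use of scikit-learn's LogisticRegression are hypothetical and are not drawn from the article.

# Minimal sketch contrasting a pre-programmed rule with a machine-learning model.
# All data, feature names and thresholds here are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical screening data: columns are [income_in_k, existing_debt_in_k]; label 1 = positive outcome.
X = np.array([[30, 20], [80, 10], [45, 40], [90, 5], [25, 30], [60, 15]])
y = np.array([0, 1, 0, 1, 0, 1])

def rule_based(applicant):
    # Pre-programmed rule: the decision logic is fixed in advance by the designer.
    income, debt = applicant
    return 1 if income > 50 and debt < 25 else 0

# Machine-learning approach: the decision rule is inferred from the data,
# so it changes whenever the training data changes.
model = LogisticRegression().fit(X, y)

new_applicant = np.array([[55, 22]])
print("rule-based decision:", rule_based(new_applicant[0]))
print("learned decision:", int(model.predict(new_applicant)[0]))

The point of the comparison is only that the second decision rule is a product of the data it was shown, which is why, as noted above, ML-driven decisions typically cannot be fully specified or predicted in advance.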
Among AI experts and researchers, there is a broad consensus that AI still ‘falls short’ of human cognitive abilities, and most AI applications that have been successful to date stem from ‘narrow AI’ or ‘weak AI’, which refer to AI applications that can perform tasks in specific and restricted domains, such as chess, image, and speech recognition (Bostrom & Yudkowsky, 2014; Lele, 2019b). Narrow AI is expected to automate and replace many mid-skill professions due to its ability to execute routine, cognitive tasks at much higher speeds and accuracy than its human counterparts (Lele, 2019b; Linkov et al., 2018). In the future, it is expected that this form of AI will eventually achieve ‘General AI’ or ‘artificial general intelligence’, a level of intelligence
comparable to or surpassing humans due to the ability to generalise across different contexts that cannot be programmed in advance (Bostrom & Yudkowsky, 2014; Wang & Siau, 2019). This introduction and the articles comprising this special issue focus on applications of narrow AI.
Both industry and governments worldwide have enthused over the potential societal benefits arising from AI and, thus, have accelerated the technology's development and deployment across various domains. Some of the impetuses for deploying AI include increasing economic efficiency and quality of life, meeting labour shortages, tackling ageing populations and strengthening national defence, and they vary between governments according to each nation's unique strategic concerns (Lele, 2019; Taeihagh & Lim, 2019). For instance, governments in Japan and Singapore have supported the use of assistive and surgical robots in healthcare and autonomous vehicles for public transportation to meet labour shortages and tackle ageing populations (Inagaki, 2019; SNDGO, 2019; Taeihagh & Lim, 2019; Tan & Taeihagh, 2021, 2021b). Cost savings and increased productivity are the main motivations for AI adoption in various sectors, and AI is already transforming the manufacturing, logistics, service, and maritime industries (World Economic Forum, 2018). AI-based technologies are also a strategic military asset for countries such as China, the US, and Russia, whose governments have made significant investments in robots, drones and fully autonomous weapon systems for national defence and geopolitical influence (Allen, 2019; Lele, 2019).
3. Understanding the risks of AI
Many scholars highlight the safety issues that can arise from deploying AI in various domains. A major challenge faced by most AI applications to date stems from their lack of generalizability to different contexts, in which they can face unexpected situations, widely referred to as ‘corner cases’, that the system had not been trained to handle (Bostrom & Yudkowsky, 2014; Lim & Taeihagh, 2019; Pei, Cao, Yang, & Jana, 2017). For instance, fatal crashes have already resulted from trials of Tesla's partially autonomous vehicles due to the system's misinterpretation of unique environmental conditions that it had not previously experienced during testing. While various means of detecting these corner cases in advance have been devised, such as simulating data on many possible driving situations for autonomous vehicles, not all scenarios can be covered or even envisioned by the human designers (Bolte, Bar, Lipinski, & Fingscheidt, 2019; Pei et al., 2017). Due to the complexity and adaptive nature of ML processes, it is difficult for humans to articulate or understand why and how a decision was made, which hinders the identification of corner-case behaviours in advance (Mittelstadt et al., 2016). As ML decisions are highly data-driven and unpredictable, the system can exhibit vastly different behaviours in response to almost identical inputs, which makes it difficult to specify ‘correct’ behaviours and verify their safety in advance (Koopman & Wagner, 2016). In particular, scholars point out potential safety hazards that can also arise from the interaction between AI systems and their users due to the problem of automation bias, where humans afford more credibility to automated decisions due to the latter's seemingly objective nature and, thus, grow complacent and display less cautious behaviour while using AI systems (Osoba & Welser, 2017; Taeihagh & Lim, 2019). Thus, human-machine interfaces significantly shape the degree of safety, particularly in social settings that
involve frequent interactions with users, such as robots for personal care, autonomous vehicles, and service providers.
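The difficulty of specifying ‘correct’ behaviour when nearly identical inputs can produce different outputs can be probed with simple robustness checks. The sketch below is a minimal, hypothetical illustration of that idea rather than a method from the article: it perturbs an input slightly and flags cases where the model's decision flips; the synthetic data and the scikit-learn decision tree are assumptions made for the example.

# Minimal sketch of a robustness probe: flag inputs where tiny perturbations
# flip a trained model's decision. All data and model choices are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # hypothetical sensor features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # hypothetical "safe"/"unsafe" label
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

def flags_unstable(model, x, eps=0.05, trials=100):
    # Returns True if small random perturbations of x change the predicted class.
    base = model.predict(x.reshape(1, -1))[0]
    noisy = x + rng.normal(scale=eps, size=(trials, x.size))
    return bool(np.any(model.predict(noisy) != base))

probe = np.array([0.01, -0.02])            # an input close to the decision boundary
print("decision flips under small perturbations:", flags_unstable(model, probe))

Such probes can only sample a tiny fraction of possible situations, which is consistent with the observation above that not all corner cases can be covered or even envisioned in advance.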
The decision-making autonomy of AI significantly reduces human control over its decisions, creating new challenges for ascribing responsibility and legal liability for the harms imposed by AI on others. Existing legal frameworks for ascribing responsibility and liability for machine operation treat machines as tools that are controlled by their human operator, based on the assumption that humans have a certain degree of control over the machine's specification (Matthias, 2004; Leenes & Lucivero, 2014). However, as AI relies largely on ML processes that learn and adapt their own rules, humans are no longer in control and, thus, cannot be expected to always bear responsibility for AI's behaviour. Under strict product liability, manufacturers and software designers could be subject to liability for manufacturing defects and design defects, but the unpredictability of ML decisions implies that many erroneous decisions made by AI are beyond the control of and cannot be anticipated by these parties (Butcher & Beridze, 2019; Kim et al., 2017; Lim & Taeihagh, 2019). This raises critical questions regarding the extent to which different parties in the AI supply chain will be held liable in different accident scenarios and the degree of autonomy that is sufficient to ‘limit’ the responsibility of these parties for such unanticipated accidents (Osoba & Welser, 2017; Wirtz, Weyerer, & Sturm, 2020). It is also widely recognised that excessive liability risks can hinder long-run innovation and improvements to the technology, which highlights a major issue regarding how governments can structure new liability frameworks that balance the benefits of promoting innovation with the moral imperative of protecting society from the risks of emerging technologies (Leenes et al., 2017).
Given the value-laden nature of the decisions automated by algorithms in various aspects of society, AI systems can potentially exhibit behaviours that conflict with societal values and norms, prompting concerns regarding the ethical issues that can arise from AI's rapid adoption. One of the most intensively discussed issues across industry and academia is the potential for algorithmic decisions to be biased and discriminatory. As ML algorithms can learn from data gathered from society to make decisions, they could not only conflict with the original ethical rules they were programmed with but also reproduce the inequality and discriminatory patterns of society that are contained in such data (Goodman & Flaxman, 2017; Osoba & Welser, 2017; Piano, 2020). If sensitive personal characteristics such as gender or race in the data are used to classify individuals, and some characteristics are found to negatively correlate with the outcome that the algorithm is designed to optimise, the individuals categorised with these traits will be penalised over others with different group characteristics (Liu, 2018). This could yield disparate outcomes in terms of risk exposure and access to social and economic benefits. Bias can also be introduced by the human designer in constructing the algorithm, and even if sensitive attributes are removed from the data, ML algorithms can use ‘probabilistically inferred’ variables as a proxy for sensitive attributes, which is much harder to regulate (Kroll et al., 2016; Osoba & Welser, 2017). The risk of bias and discrimination stemming from the optimisation process in AI algorithms reflects a dominant concern surrounding discussions of fairness in AI governance: the trade-off between equity and efficiency in algorithmic decision-making (S?tra, 2020). How a balance can be struck to produce socially desirable outcomes catering to different groups' ethical preferences remains subject to debate.
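The proxy problem described above can be illustrated with a small synthetic experiment: even when the sensitive attribute is withheld from the model, a correlated ‘neutral’ feature can reproduce the disparity. The sketch below is a hypothetical illustration only; the synthetic data, the ‘postcode’ feature, and the use of scikit-learn are assumptions introduced for the example and are not taken from the article.

# Minimal sketch of proxy bias: after dropping the sensitive attribute,
# a correlated feature (here, a synthetic 'postcode') can act as a stand-in for it.
# All data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
group = rng.integers(0, 2, n)                    # sensitive attribute (not given to the model)
postcode = group * 0.9 + rng.normal(0, 0.3, n)   # 'neutral' feature correlated with group
income = rng.normal(50, 10, n)
# Historical outcomes already disadvantage group 1 in this synthetic data.
approved = ((income / 100 - 0.4 * group + rng.normal(0, 0.1, n)) > 0.1).astype(int)

X = np.column_stack([income, postcode])          # sensitive attribute excluded from the features
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted approval rate for group {g}: {pred[group == g].mean():.2f}")
# The gap between groups persists because 'postcode' proxies for the sensitive attribute.

Printing the two rates shows the disparity surviving the removal of the sensitive attribute, which is the regulatory difficulty the paragraph above points to.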
A vast body of literature and government reports has highlighted issues of data privacy and surveillance that can arise from AI applications. As algorithms in AI systems utilise sensors to collect data and big data technologies to store, process and transmit data through external communication networks, there have been concerns regarding the potential misuse of personal data by third parties and increasing calls for more holistic data governance frameworks to ensure reliable sharing of data within and between organisations (Gasser & Almeida, 2017; Janssen, Brous, Estevez, Barbosa, & Janowski, 2020). AI systems store extensive personal information about their users that can be transmitted to third parties to profile individuals' preferences, such as using past travel data collected in autonomous vehicles to tailor advertisements to passengers (Chen et al., 2020; Lim & Taeihagh, 2018), or using personal and medical information collected by personal care robots and networked medical devices for the surveillance of individuals (Guihot et al., 2017; Leenes et al., 2017; Tan, Taeihagh, & Tripathi, 2021). The ownership of such data and how AI system developers should design these robots to adhere to privacy laws are key concerns that remain to be addressed (Chen et al., 2020; Leenes et al., 2017). Surveillance is also a key concern over the use of AI in many domains, such as surveillance robots in the workplace that monitor employee performance and government agencies potentially using autonomous vehicles to track passenger movements, with negative implications for democratic freedoms and personal autonomy (Leenes et al., 2017; Lim & Taeihagh, 2018).
The autonomy assumed by AI systems to make decisions in place of humans can introduce ethical concerns in their application across various sectors. Studies have underlined the potential for personalisation algorithms used by digital platforms to undermine the decision-making autonomy of data subjects by filtering information presented to users based on their preferences and influencing their choices. By exerting control over an individual's decision and reducing the ‘diversity of information’ provided, personalisation algorithms can reduce personal autonomy and, thus, be construed as unethical (Mittelstadt et al., 2016). In healthcare, the use of robots to provide personal care services has prompted concerns over the potential loss of autonomy and dignity of care recipients if robots excessively restrict patients' mobility to avoid dangerous situations (Leenes et al., 2017; Tan et al., 2021). Studies have yet to examine how these risks can be balanced against their potential benefits for autonomy in other scenarios, such as autonomous vehicles increasing mobility for the disabled and elderly (Lim & Taeihagh, 2018), and personal care robots offering patients greater freedom of movement with the assurance of being monitored (Leenes et al., 2017). In the military, autonomous weapon systems such as drones and unmanned aerial vehicles have been developed to improve the precision and reliability of military combat, planning and strategy, but there has been increasing momentum across industry and academia, including prominent figures, highlighting their ethical and legal unacceptability (Lele, 2019; Roff, 2014). Central to these concerns is the delegation of authority to a machine to exert lethal force ‘independently of human determinations of its moral and legal legitimacy’ and the lack of controllability over these adaptive systems that could amplify the consequences of failure, prompting fears of a dystopian future where such weapons inflict casualties and escalate crises at a much larger scale (Firlej & Taeihagh, 2021; Scharre, 2016; Solovyeva & Hynek, 2018).

Unemployment and social instability resulting from the automation of routine cognitive tasks remains one of the most publicly debated issues concerning AI adoption
(Frey & Osborne, 2017; Linkov et al., 2018). The effects of automation are already felt in industries such as the manufacturing, entertainment, healthcare, finance, and transport sectors as companies increasingly invest in AI to reduce labour costs and boost efficiency (Linkov et al., 2018). While technological advancements have historically created new jobs as well, there are concerns that the distribution of employment opportunities is uneven across sectors and skill levels. Studies show that highly routine and cognitive tasks that characterise many middle-skilled jobs are at a high risk of automation. In contrast, tasks with relatively lower risks of automation are those that machines cannot easily replicate; these include manual tasks in low-skilled service occupations that require flexibility and ‘physical adaptability’, as well as high-skilled occupations in engineering and science that require creative intelligence (Frey & Osborne, 2017; World Economic Forum, 2018). As high- and low-skilled occupations benefit from increased wage premiums and middle-skilled jobs are being phased out, automation could exacerbate income and social inequalities (Alonso et al., 2018).
4. Governing AI
4.1. Why AI governance is important
Understanding and managing the risks posed by AI is crucial to realise the benefits of the technology. Increased efficiency and quality in the delivery of goods and services, greater autonomy and mobility for the elderly and disabled, and improved safety from using AI in safety-critical operations such as in healthcare, transport and emergency response are among the many socio-economic benefits arising from AI that can propel smart and sustainable development (Agarwal, Gurjar, Agarwal, & Birla, 2015; Lim & Taeihagh, 2018; Yigitcanlar et al., 2018). Thus, as AI systems develop and increase in complexity, their risks and interconnectivity with other smart devices and systems will also increase, necessitating the creation of both specific governance mechanisms, such as for healthcare, transport and autonomous weapons, as well as a broader global governance framework for AI (Butcher & Beridze, 2019).
4.2. Challenges to AI governance
The high degree of uncertainty and complexity of the AI landscape imposes many challenges for governments in designing and implementing effective policies to govern AI. Many challenges posed by AI stem from the nature of the problems, which are highly unpredictable, intractable and nonlinear, making it difficult