
TLP:CLEAR

Principles for the Secure Integration of Artificial Intelligence in Operational Technology

Publication: December 3, 2025

U.S. Cybersecurity and Infrastructure Security Agency
Australian Signals Directorate’s Australian Cyber Security Centre
U.S. National Security Agency’s Artificial Intelligence Security Center
U.S. Federal Bureau of Investigation
Canadian Centre for Cyber Security
German Federal Office for Information Security
Netherlands National Cyber Security Centre
New Zealand National Cyber Security Centre
United Kingdom National Cyber Security Centre

This document is marked TLP:CLEAR. Disclosure is not limited. Sources may use TLP:CLEAR when information carries minimal or no foreseeable risk of misuse, in accordance with applicable rules and procedures for public release. Subject to standard copyright rules, TLP:CLEAR information may be distributed without restriction. For more information, see Traffic Light Protocol (TLP) Definitions and Usage.


Table of Contents

Introduction
Important Terminology
Scope
Types of AI Techniques
AI Applications According to the Purdue Model
Principles for the Secure Integration of AI in OT
Principle 1 – Understand AI
1.1 Understand the Unique Risks of AI and Potential Impact to OT
1.2 Understand the Secure AI System Development Lifecycle
1.3 Educate Personnel on AI
Principle 2 – Consider AI Use in the OT Domain
2.1 Consider the OT Business Case for AI Use
2.2 Manage OT Data Security Risks for AI Systems
2.3 Understanding the Role of OT Vendors in AI Integration
2.4 Evaluate Challenges in AI-OT System Integration
Principle 3 – Establish AI Governance and Assurance Frameworks
3.1 Establish Governance Mechanisms for AI in OT
3.2 Integrating AI Into Existing Security and Cybersecurity Frameworks
3.3 Conduct Thorough AI Testing and Evaluation
3.4 Navigating Regulatory and Compliance Considerations for AI in OT
Principle 4 – Embed Oversight and Failsafe Practices Into AI and AI-Enabled OT Systems
4.1 Establish Monitoring and Oversight Mechanisms for AI in OT
4.2 Embed Safety and Failsafe Mechanisms
Conclusion
Resources
Disclaimer
Acknowledgements
Version History
Appendix: Terminology
References


Introduction

Since the public release of ChatGPT in November 2022, artificial intelligence (AI) has been integrated into many facets of human society. For critical infrastructure owners and operators, AI can potentially be used to increase efficiency and productivity, enhance decision-making, save costs, and improve customer experience. Despite the many benefits, integrating AI into operational technology (OT) environments that manage essential public services also introduces significant risks, such as OT process models drifting over time or safety-process bypasses, that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure.

This guidance, co-authored by the Cybersecurity and Infrastructure Security Agency (CISA) and Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) in collaboration with the National Security Agency’s Artificial Intelligence Security Center (NSA AISC), the Federal Bureau of Investigation (FBI), the Canadian Centre for Cyber Security (Cyber Centre), the German Federal Office for Information Security (BSI), the Netherlands National Cyber Security Centre (NCSC-NL), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK), hereafter referred to as the “authoring agencies,” provides critical infrastructure owners and operators with practical information for integrating AI into OT environments. This guidance outlines four key principles critical infrastructure owners and operators can follow to leverage the benefits of AI in OT systems while reducing risk:

1. Understand AI. Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle.

2. Consider AI Use in the OT Domain. Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration.

3. Establish AI Governance and Assurance Frameworks. Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance.

4. Embed Safety and Security Practices Into AI and AI-Enabled OT Systems. Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans.

The authoring agencies encourage critical infrastructure owners and operators to review this guidance and action the principles so they can safely and securely integrate AI into OT systems.

Important Terminology

The scope of this guidance specifically covers how critical infrastructure owners and operators can help ensure the safety and security of AI systems in OT environments. As such, the authoring agencies use the following specific definitions for terms in this guidance in order to avoid conflation with their definitions in other contexts:

Artificial intelligence (AI) is a system that uses machine- and human-based inputs to make predictions, recommendations, or decisions influencing real or virtual environments.[1]

Safety refers to physical safety (formally, functional safety) in an OT environment. OT systems control physical systems that can harm people or property, such as systems that deliver biological or chemical agents, control operations for a dam or wastewater treatment, or automate the flow of vehicle traffic. In this guidance, “safety” as a word on its own always refers to functional safety.

Security (used interchangeably in this guidance with “information security” and “cybersecurity”) refers to ensuring the security properties, such as confidentiality, integrity, and availability, of information and information systems.

See Appendix: Terminology for a full list of definitions within the scope of this guidance and sources for these definitions.

[1] This document uses this AI definition from 15 U.S.C. 9401(3); however, definitions of AI may vary among groups and jurisdictions.

Scope

Machine learning (ML), statistical modeling, and algorithmic calculations are all subsets of AI techniques that have been used in critical infrastructure engineering processes for many years. While ML and traditional statistical modeling are both used for predicting outcomes or making decisions based on data, they differ in their approach, assumptions, applications, and considerations for secure integration with OT systems. The scope of this guidance focuses on ML- and large language model (LLM)-based AI and AI agents because integrating OT with these types of AI systems involves more complex safety and security considerations. However, this guidance may also be applied to systems augmented with traditional statistical modeling and other logic-based automation. The following subsections define these different AI techniques.

Types of AI Techniques

Traditional statistical modeling uses mathematical formulas to accurately describe the relationships between variables. It assumes that the data follows certain distributions and that the relationships are either linear or can be approximated by linear models. Statistical modeling uses techniques such as regression analysis, hypothesis testing, and confidence intervals to directly estimate model parameters and make predictions. It is commonly used for tasks such as forecasting, optimization, and assisting in operator decision-making. Non-machine-learning-based AI systems employ algorithms to automate decision-making and control processes; in OT systems, this includes ladder logic automation routines and a class of safety instrumented systems.
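For illustration, the following minimal sketch shows the kind of regression-based forecasting described above applied to hypothetical historian readings; the sensor, values, and alarm threshold are invented for this example and are not drawn from the guidance.

```python
# Minimal sketch: linear-regression forecasting on hypothetical OT historian data.
# All names, values, and the alarm threshold are invented for illustration.
import numpy as np

# Hourly bearing-temperature readings (degrees C) exported from a historian.
hours = np.arange(12)                      # time index: 0..11 hours
temps = np.array([61.0, 61.4, 61.9, 62.1, 62.8, 63.0,
                  63.6, 64.1, 64.3, 64.9, 65.2, 65.8])

# Fit a first-order (linear) model: temperature ~= slope * hour + intercept.
slope, intercept = np.polyfit(hours, temps, deg=1)

# Extrapolate one hour ahead and compare against a hypothetical engineering limit.
next_temp = slope * 12 + intercept
print(f"Predicted temperature at hour 12: {next_temp:.1f} C")
if next_temp > 70.0:
    print("Forecast exceeds limit; flag for operator review.")
```

The model parameters here are estimated directly and are fully interpretable, which is characteristic of the statistical techniques described above.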

Machine learning systems use algorithms to learn from data and make predictions or decisions without being explicitly programmed. The ML model can handle complex relationships and non-linear interactions between variables. ML models use various techniques, such as supervised, unsupervised, and reinforcement learning, when developing representations and making predictions based on data. ML is commonly used in fields like computer vision, natural language processing, and robotics for tasks such as image classification, speech recognition, and autonomous driving.
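By contrast, the sketch below learns a model of normal behavior directly from data using an unsupervised ML technique (an isolation forest); the feature names, simulated data, and contamination setting are illustrative assumptions only.

```python
# Minimal sketch: unsupervised anomaly detection on hypothetical sensor data
# using an isolation forest. Names and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" operating data: columns = [flow_rate, pressure].
normal = rng.normal(loc=[100.0, 5.0], scale=[2.0, 0.1], size=(500, 2))

# Train on historical normal data only.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new readings: +1 = consistent with training data, -1 = anomalous.
new_readings = np.array([[101.2, 5.05],    # plausible reading
                         [130.0, 3.20]])   # far outside the learned envelope
print(model.predict(new_readings))
```

Unlike the regression example, the learned decision boundary here is not expressed as a simple formula, which foreshadows the explainability considerations discussed later in this guidance.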

Large language models are advanced ML models designed to understand a natural language prompt and generate a response that humans can understand. LLMs use patterns in language and multimodal datasets in the production of complex responses to user prompts. LLM engineers usually build in randomness when generating outputs[2] so that the LLMs don’t always produce the same response to the same inputs. LLMs can power generative AI applications that support critical infrastructure entities by enhancing decision-making, automating routine tasks, and optimizing maintenance schedules, with the goal of improving efficiency and reliability in operations.

[2] Sander Shulhoff, “Basic LLM Settings,” Learn Prompting, last modified March 10, 2025, /docs/intermediate/configuration_hyperparameters.
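The built-in randomness described above typically comes from sampling parameters such as temperature. The toy function below is not drawn from any particular LLM; it simply shows why identical prompts can produce different outputs when tokens are sampled from a probability distribution rather than chosen deterministically.

```python
# Toy illustration of temperature-based sampling, a common source of the
# output randomness described above. Purely illustrative; not a real LLM.
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Draw one token index from softmax(logits / temperature)."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    scaled -= scaled.max()                          # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.5, 0.2]   # hypothetical model scores for three candidate tokens
print([sample_next_token(logits) for _ in range(5)])                     # varies run to run
print([sample_next_token(logits, temperature=1e-6) for _ in range(5)])   # near-deterministic
```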

AI agents are a type of software that can process data, perform decision-making capabilities, and initiate autonomous actions using AI and ML models. There are many types of agentic AI systems, including systems that use LLMs to power generative AI applications or agents and systems that combine different ML techniques, perspectives of analysis, decision-making methodologies, and autonomous action capabilities. Like LLMs, they can enhance decision-making, automate routine tasks, and optimize maintenance schedules, which enables them to improve and streamline critical infrastructure operations. Implementing error-checking can improve AI agents’ performance by avoiding problems and ensuring their outputs are within the expected bounds.
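Such error-checking can be as simple as validating every agent-proposed action against engineering limits before it reaches the process, as in the sketch below; the setpoint names, limits, and fallback behavior are hypothetical and would need to come from a site’s own engineering and safety documentation.

```python
# Minimal sketch of output bounds checking for an AI agent's proposed actions.
# Limits, setpoint names, and fallback behavior are hypothetical assumptions.
SAFE_LIMITS = {
    "pump_speed_rpm": (0.0, 1800.0),
    "valve_open_pct": (0.0, 100.0),
}

def validate_action(setpoint: str, value: float) -> float:
    """Return the value if within engineering limits; otherwise reject it."""
    if setpoint not in SAFE_LIMITS:
        raise ValueError(f"Unknown setpoint: {setpoint}")
    low, high = SAFE_LIMITS[setpoint]
    if low <= value <= high:
        return value
    # Out-of-bounds proposals are never applied automatically; they are
    # logged and referred to a human operator instead.
    raise ValueError(f"{setpoint}={value} outside safe range [{low}, {high}]")

# Example: an agent proposes pump speeds; only in-range values pass through.
print(validate_action("pump_speed_rpm", 1200.0))      # accepted
try:
    validate_action("pump_speed_rpm", 2500.0)         # rejected
except ValueError as err:
    print("Referred to operator:", err)
```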

AI Applications According to the Purdue Model

The Purdue Model is still a widely accepted framework for understanding the hierarchical relationships between OT and IT devices and networks. Table 1 shows examples of established and potential AI applications in critical infrastructure according to the Purdue Model.[3] ML techniques, such as predictive models, are typically used in operational layers (0-3), while LLMs are typically used in the business context (4-5), potentially on data exported from the OT network.

[3] The version of the Purdue Model used in this guidance was sourced from Manuel Humberto Santander Pelaez, “Controlling Network Access to ICS Systems,” Diaries (blog), SANS Technology Institute Internet Storm Center, July 3, 2023, /diary/30000.

Table 1. AI Applications According to the Purdue Model

Level 0: Field Devices
Description: Sensors, actuators, and other devices that interact with physical processes.
Example AI uses: OT data source. Field devices may generate OT data that can be used for training AI models (primarily predictive ML models) or identifying significant deviations.

Level 1: Local Controllers
Description: Apparatus and systems designed to offer automated regulation of a process, cell, or line; examples include programmable logic controllers (PLCs) and remote terminal units (RTUs).
Example AI uses: AI for local control. Some modern PLCs or edge controllers execute lightweight, pre-trained predictive models for classification for tasks like local anomaly detection, load balancing, and maintaining a known safe state (an illustrative sketch of this pattern follows the table).

Level 2: Local Supervisory
Description: Observation and managerial oversight for an individual process, line, or cell; examples include supervisory control and data acquisition (SCADA) systems, distributed control systems (DCSs), and human-machine interfaces (HMIs).
Example AI uses: Quality control. AI models (primarily predictive ML models) may be used for analyzing data from the SCADA system or DCS to detect early signs of equipment anomalies and alert operators that corrective action may be required.

Level 3: Site-Wide Supervisory
Description: Monitoring, supervisory, and operational support for all or part of the regions covered by the company; examples include manufacturing execution systems and historians.
Example AI uses: Predictive maintenance. AI models (primarily predictive ML models) may be used for analyzing aggregated historian OT data and predicting equipment maintenance requirements. Support operator decision-making. AI models may also be integrated into local supervisory systems to provide system recommendations that support operator decision-making, such as operations measurement.

Levels 4 & 5: Enterprise & Business Networks
Description: IT systems that manage business and corporate processes and decisions; in the context of critical infrastructure and OT, examples include OT data analysis and autonomous defense for both OT and IT systems.
Example AI uses: Workflow optimization. AI systems (including AI agents and LLMs) may be used for improving business processes, such as the intersection between business use cases and engineering. Behavioral analytics and profiling of OT and IT data. AI can be used for analyzing OT data in conjunction with IT data to measure operations, perform anomaly and threat detection, determine hardening mitigations, and provide information that supports prioritized resiliency decisions.
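As a rough illustration of the Level 1 pattern in Table 1, the sketch below runs a lightweight scoring function on an edge controller and reverts to a known safe state when the score indicates an anomaly or inference fails; the model, threshold, and safe state are placeholders rather than recommendations from this guidance.

```python
# Rough sketch of the Level 1 pattern from Table 1: a lightweight, pre-trained
# model on an edge controller with a known-safe-state fallback. The scoring
# rule, threshold, and safe state are hypothetical placeholders.
SAFE_STATE_SPEED = 0.0          # assumed known safe state: drive stopped
ANOMALY_THRESHOLD = 0.8         # assumed score above which control is not trusted

def anomaly_score(vibration_mm_s: float) -> float:
    """Placeholder for a small pre-trained model deployed to the controller."""
    return min(vibration_mm_s / 10.0, 1.0)   # stand-in scoring rule

def control_step(vibration_mm_s: float, requested_speed: float) -> float:
    """Return the speed to command, reverting to the safe state on anomalies."""
    try:
        score = anomaly_score(vibration_mm_s)
    except Exception:
        return SAFE_STATE_SPEED              # fail safe if inference errors out
    if score >= ANOMALY_THRESHOLD:
        return SAFE_STATE_SPEED              # fail safe on suspected anomaly
    return requested_speed

print(control_step(vibration_mm_s=2.0, requested_speed=1500.0))   # normal: 1500.0
print(control_step(vibration_mm_s=9.5, requested_speed=1500.0))   # anomaly: 0.0
```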


Principles for the Secure Integration of AI in OT

Principle 1 – Understand AI

1.1 Understand the Unique Risks of AI and Potential Impact to OT

The following section discusses AI integration risks and the potential impact to OT operations. Table 2 provides a broad overview of known AI risks that critical infrastructure owners and operators should consider. (Note: This is a non-exhaustive list; critical infrastructure owners and operators should investigate risks specific to their organization.) Subsequent sections of this guidance discuss mitigation considerations for these risks; see cross-references in the Mitigations column of Table 2.

Table 2. AI Risks and Impacts in an OT Environment

Cybersecurity Risks
Risk: AI data, models, and deployment software can be manipulated to cause incorrect outcomes or bypass security and functional safety measures or guardrails. Traditional cybersecurity risks remain within AI systems; as such, security measures like access control, auditing, and encryption still apply for securing AI and AI-enabled systems. In addition, AI-enabled systems are subject to AI-specific cybersecurity risks, such as prompt injection.
OT impacts: Impacted system availability, functional safety risks, financial losses, reputational damage, network/OT compromise, cascading compromise.
Mitigations: 1.2 Understand the Secure AI System Development Lifecycle; 2.4 Evaluate Challenges in AI-OT System Integration; 3.3 Conduct Thorough AI Testing and Evaluation.

Data Quality
Risk: AI models can only be as effective as the quality of their training data. Collecting high-quality, normalized sensor data can be difficult, especially in distributed OT environments. Centralizing this operational data creates its own risk as threat actors can use it to create a more targeted engineering impact.
OT impacts: Reduced OT safety and system availability from poor data quality.
Mitigations: 2.2 Manage OT Data Security Risks for AI Systems.

AI Model Drift
Risk: AI models may become less accurate over time due to data being introduced to the model that is not represented by the model’s initial training data. Alterations to production processes can affect model performance. (A minimal drift-monitoring sketch follows this table.)
OT impacts: Increased dependencies on changes, loss of productivity, reduced OT safety and system availability.
Mitigations: 4.1 Establish Monitoring and Oversight Mechanisms for AI in OT.

Lack of Explainability
Risk: Understanding an AI model’s decision-making process may be difficult; this makes it challenging to diagnose and correct errors or properly audit a system.
OT impacts: Increased recovery time, functional safety risks, reduced system availability, complexity in troubleshooting.
Mitigations: 1.3 Educate Personnel on AI; 4.1 Establish Monitoring and Oversight Mechanisms for AI in OT.

Operator Cognitive Load and Unnecessary Downtime
Risk: AI may generate alarm errors that could cause unnecessary downtime or safety incidents. These alarm errors increase cognitive load, distract operators, and potentially lead to further human error.
OT impacts: Reduced system availability, functional safety risks, financial losses, reputational damage.
Mitigations: 1.3 Educate Personnel on AI; 4.1 Establish Monitoring and Oversight Mechanisms for AI in OT.

Regulatory Compliance
Risk: Compliance with regulatory requirements, such as those related to OT safety or privacy, can be challenging due to the evolving nature of AI, technical standards, and regulatory frameworks. For example, while producing a robust audit trail of AI-driven decision-making may be difficult, it may be required for regulatory compliance.
OT impacts: Functional safety risks, financial losses, reputational damage.
Mitigations: 3.4 Navigating Regulatory and Compliance Considerations for AI in OT; 4.1 Establish Monitoring and Oversight Mechanisms for AI in OT; 4.2 Embed Safety and Failsafe Mechanisms.

AI Dependency
Risk: Overreliance on AI can lead to operators missing critical safety-related information if the AI misses it, and losing valuable skills for safely operating equipment manually or without the AI functionality.
OT impacts: Dependence on technology, complexity in troubleshooting.
Mitigations: 1.3 Educate Personnel on AI.

Interoperability Issues
Risk: Integrating AI systems with existing OT infrastructure can be complicated by interoperability challenges, which may arise from differences in OT communication protocols or data formats.
OT impacts: Increased maintenance costs, recovery challenges.
Mitigations: 2.1 Consider the OT Business Case for AI Use; 2.4 Evaluate Challenges in AI-OT System Integration; 3.1 Establish Governance Mechanisms for AI in OT; 3.3 Conduct Thorough AI Testing and Evaluation.

Complexity
Risk: Incorporating AI usually requires increasing the complexity of the overall system to support process automation.
OT impacts: Functional safety risks, complexity in troubleshooting.
Mitigations: 2.1 Consider the OT Business Case for AI Use; 2.4 Evaluate Challenges in AI-OT System Integration.

Reliability
Risk: AI may not be reliable enough to independently make critical decisions in industrial environments. AI can also hallucinate (i.e., fabricate a plausible, but false, response or data), which would provide operators with incorrect information for decision-making. As such, AI such as LLMs almost certainly should not be used to make safety decisions for OT environments.
OT impacts: Decisions made by AI developers may pose OT safety and reliability risks, increased documentation costs, uncertainty due to changes in automated decision-making over time, increased risk of cascading failure due to tighter coupling of actions. False information provided to decision makers poses risks of unsafe operating conditions, equipment damage, and production halts.
Mitigations: 2.1 Consider the OT Business Case for AI Use; 3.1 Establish Governance Mechanisms for AI in OT; 3.2 Integrating AI Into Existing Security and Cybersecurity Frameworks; 3.3 Conduct Thorough AI Testing and Evaluation; 4.1 Establish Monitoring and Oversight Mechanisms for AI in OT; 4.2 Embed Safety and Failsafe Mechanisms.
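One simple way to operationalize monitoring for the model drift risk listed above is to compare recent live inputs against training-data statistics, as sketched below; the window size, threshold, and simulated data are illustrative assumptions, not values endorsed by the authoring agencies.

```python
# Simple sketch of input-drift monitoring for the "AI Model Drift" row above:
# compare recent live readings against training-data statistics. The window,
# threshold, and data are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(seed=1)

train = rng.normal(50.0, 2.0, size=5000)       # feature values seen at training time
train_mean, train_std = train.mean(), train.std()

def drift_alert(recent: np.ndarray, z_threshold: float = 3.0) -> bool:
    """Flag drift when the recent mean sits far from the training mean."""
    z = abs(recent.mean() - train_mean) / (train_std / np.sqrt(len(recent)))
    return bool(z > z_threshold)

steady = rng.normal(50.0, 2.0, size=200)       # looks like training data
shifted = rng.normal(55.0, 2.0, size=200)      # process has changed
print(drift_alert(steady))    # expected: False
print(drift_alert(shifted))   # expected: True
```

In practice a drift alert like this would feed the monitoring and oversight mechanisms referenced in section 4.1 rather than trigger automated changes to the process.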

1.2 Understand the Secure AI System Development Lifecycle

To address the unique challenges of integrating AI into OT environments, critical infrastructure owners and operators should verify that the AI system was designed securely and understand their roles and responsibilities through the AI system’s lifecycle. Similar to hybrid ownership models used with cloud systems, owners and operators must clearly define and communicate these roles and responsibilities with the AI system manufacturer, OT supplier, and any system integrator or managed service provider roles.

NCSC-UK and CISA’s joint Guidelines for Secure AI System Development emphasizes the following key stages of the AI system development lifecycle:[4]

Secure Design. Design the AI system with security considerations in mind from its inception, including using robust coding, protocols, and data protection measures.

Secure Procurement or Development. Select vendors who adhere to secure practices and develop AI systems using secure methodologies and tools.

Secure Deployment. Deploy the AI system using methods that maintain its security posture, including using proper network segmentation and access control, as well as verifying and validating that the AI system works as intended (a minimal illustration of such pre-deployment validation follows at the end of this section).

Secure Operation and Maintenance. Ensure the AI system continues operating securely throughout its lifecycle, including by implementing regular updates and patches, and monitoring potential vulnerabilities.

[4] The UK Government’s Code of Practice for the Cyber Security of AI and its technical implementation guide also provide scenario-based cybersecurity mitigation advice according to the secure AI system development lifecycle.

Critical infrastructure owners and operators should also carefully evaluate the trade-offs between different methods for sourcing an AI system:

Procure an AI System. Select a pre-developed AI system from a vendor that meets specific security requirements and that the OT supplier agrees with.

Develop an AI System. Build an AI system in house; this enables complete control over its design and implementation.

Customize an Existing AI System. Work with a vendor to tailor their existing AI system to meet specific OT system needs.

Where possible, critical infrastructure owners and operators should demand AI systems that are secure by design and will not negatively impact OT operation and safety. Critical infrastructure owners and operators should consult CISA’s Secure by Design webpage and resources, and the joint guidance Secure by Demand: Priority Considerations for Operational Technology Owners and Operators when Selecting Digital Products for opportunities to incorporate these principles into the design of their AI and OT systems.
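As one possible illustration of the “verify and validate that the AI system works as intended” step under Secure Deployment, the sketch below gates go-live on agreed thresholds over a held-back acceptance set; the metrics, thresholds, and labels shown are placeholders that an organization would replace with its own acceptance criteria.

```python
# Minimal sketch of a pre-deployment acceptance gate: the model only goes live
# if it clears agreed thresholds on a held-back test set. Metrics, thresholds,
# and data below are placeholder assumptions, not recommended values.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float((y_true == y_pred).mean())

def false_alarm_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    normal = y_true == 0
    return float((y_pred[normal] == 1).mean())

def acceptance_gate(y_true, y_pred, min_accuracy=0.95, max_false_alarms=0.02) -> bool:
    """Approve deployment only if both acceptance thresholds are met."""
    acc = accuracy(y_true, y_pred)
    far = false_alarm_rate(y_true, y_pred)
    print(f"accuracy={acc:.3f}, false_alarm_rate={far:.3f}")
    return acc >= min_accuracy and far <= max_false_alarms

# Hypothetical held-back labels (0 = normal, 1 = anomaly) and model outputs.
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.array([0] * 94 + [1] * 1 + [1] * 4 + [0] * 1)
print("Approve deployment:", acceptance_gate(y_true, y_pred))
```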

1.3 Educate Personnel on AI

Integrating AI into OT environments can lead to personnel relying too much on automation, resulting in reduced human oversight and situational awareness. This can have significant consequences, including:

Dependency Risks and Skill Erosion. Heavy reliance on AI may cause OT personnel to lose manual skills needed for managing systems during AI failures or system outages.

Skill Gaps. OT personnel may misinterpret AI outputs, leading to incorrect actions; OT personnel may also lack expertise for managing or troubleshooting AI systems if they malfunction.

Critical infrastructure owners and operators may mitigate these risks by focusing on skill development and cross-disciplinary collaboration, such as:

Training OT teams on AI fundamentals and threat modeling so teams can effectively interpret and validate AI outputs and maintain operational competencies alongside AI systems, for example, training teams to use alternative sensors (e.g., human sens
