PyTorch Handbook of Common Functions

Contents

Chapter 1, Tensors (torch): Creation Ops; Indexing, Slicing, Joining and Mutating Ops; Random Sampling; Serialization; Parallelism; Math Operations; Reduction Ops; Comparison Ops; Other Operations; BLAS and LAPACK Operations. Chapters 2 through 14 cover the remaining packages; Chapter 8 documents the automatic differentiation package (torch.autograd).

The torch package contains the data structure for multi-dimensional tensors and the mathematical operations defined over them. It also provides a number of utilities, some of which allow tensors and arbitrary types to be serialized efficiently. The package has a CUDA counterpart that lets you run tensor computations on NVIDIA GPUs (compute capability >= 2.0).

torch.is_tensor(obj)
    Returns True if obj is a pytorch tensor.
    Parameters: obj (Object) – the object to test

torch.is_storage(obj)
    Returns True if obj is a pytorch storage object.
    Parameters: obj (Object) – the object to test

torch.numel(input) → int
    Returns the number of elements in the input tensor.
    Parameters: input (Tensor) – the input tensor

    >>> a = torch.randn(1, 2, 3, 4, 5)
    >>> torch.numel(a)
    120
    >>> a = torch.zeros(4, 4)
    >>> torch.numel(a)
    16

torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None)
    Sets print options, modeled after NumPy's.
    Parameters: precision – number of digits of precision for floating point output (default 8); threshold – total number of array elements that triggers summarization rather than a full repr (default 1000); edgeitems – number of items shown at both ends of each dimension in a summary (default 3); linewidth – number of characters per line before a line break is inserted (default 80; thresholded matrices ignore this parameter); profile – sane defaults for pretty printing, can override any of the above options ("default", "short", "full")

Creation Ops

torch.eye(n, m=None, out=None)
    Parameters: n (int) – the number of rows; m (int, optional) – the number of columns, defaults to n if None; out (Tensor, optional) – output tensor
    Returns: a 2-D tensor with ones on the diagonal and zeros everywhere else
    Return type: Tensor

    >>> torch.eye(3)
     1  0  0
     0  1  0
     0  0  1
    [torch.FloatTensor of size 3x3]

torch.from_numpy(ndarray) → Tensor
    Creates a pytorch Tensor from a numpy.ndarray. The returned tensor and the ndarray share the same memory, so modifying one will modify the other. The returned tensor cannot be resized.

    >>> a = numpy.array([1, 2, 3])
    >>> t = torch.from_numpy(a)
    >>> t
    torch.LongTensor([1, 2, 3])
    >>> t[0] = -1
    >>> a
    array([-1, 2, 3])

torch.linspace(start, end, steps=100, out=None) → Tensor
    Returns a 1-D tensor of steps points evenly spaced between start and end.
    Parameters: start (float) – the starting value of the sequence; end (float) – the ending value of the sequence; steps (int) – the number of points sampled between start and end; out (Tensor, optional) – output tensor

    >>> torch.linspace(3, 10, steps=5)
      3.0000
      4.7500
      6.5000
      8.2500
     10.0000
    [torch.FloatTensor of size 5]
    >>> torch.linspace(start=-10, end=10, steps=5)
    -10
     -5
      0
      5
     10
    [torch.FloatTensor of size 5]

torch.logspace(start, end, steps=100, out=None) → Tensor
    Returns a 1-D tensor of length steps whose points are logarithmically spaced between 10^start and 10^end.
    Parameters: start (float); end (float); steps (int); out (Tensor, optional) – output tensor

    >>> torch.logspace(start=-10, end=10, steps=5)
     1.0000e-10
     1.0000e-05
     1.0000e+00
     1.0000e+05
     1.0000e+10
    [torch.FloatTensor of size 5]
    >>> torch.logspace(start=0.1, end=1.0, steps=5)
      1.2589
      2.1135
      3.5481
      5.9566
     10.0000
    [torch.FloatTensor of size 5]

torch.ones(*sizes, out=None) → Tensor
    Returns a tensor of the shape given by the variable argument sizes, filled with the scalar value 1.
    Parameters: sizes (int...) – the shape of the output; out (Tensor, optional) – output tensor

    >>> torch.ones(2, 3)
     1  1  1
     1  1  1
    [torch.FloatTensor of size 2x3]

torch.rand(*sizes, out=None) → Tensor
    Returns a tensor filled with random numbers drawn from the uniform distribution on the interval [0, 1), with shape given by the variable argument sizes.
    Parameters: sizes (int...); out (Tensor, optional)

    >>> torch.rand(2, 3)

torch.randn(*sizes, out=None) → Tensor
    Returns a tensor filled with random numbers drawn from the standard normal distribution (mean 0, variance 1, i.e. Gaussian white noise), with shape given by the variable argument sizes.
    Parameters: sizes (int...); out (Tensor, optional)

    >>> torch.randn(2, 3)

torch.randperm(n, out=None) → LongTensor
    Returns a random permutation of the integers from 0 to n - 1.
    Parameters: n (int) – the upper bound (exclusive)

    >>> torch.randperm(4)

torch.arange(start, end, step=1, out=None) → Tensor
    Returns a 1-D tensor with floor((end - start) / step) elements, drawn from the half-open interval [start, end) with step as the gap between adjacent points.
    Parameters: start (float); end (float); step (float); out (Tensor, optional)

    >>> torch.arange(1, 4)
     1
     2
     3
    [torch.FloatTensor of size 3]
    >>> torch.arange(1, 2.5, 0.5)
     1.0000
     1.5000
     2.0000
    [torch.FloatTensor of size 3]

torch.range(start, end, step=1, out=None) → Tensor
    Returns a 1-D tensor with floor((end - start) / step) + 1 elements, drawn from the closed interval [start, end] with step as the gap between adjacent points. (Prefer torch.arange, whose half-open convention matches python's range.)
    Parameters: start (float); end (float); step (int); out (Tensor, optional)

    >>> torch.range(1, 4)
     1
     2
     3
     4
    [torch.FloatTensor of size 4]
    >>> torch.range(1, 4, 0.5)
    [torch.FloatTensor of size 7]

torch.zeros(*sizes, out=None) → Tensor
    Returns a tensor of the shape given by the variable argument sizes, filled with the scalar value 0.
    Parameters: sizes (int...); out (Tensor, optional)

    >>> torch.zeros(2, 3)
     0  0  0
     0  0  0
    [torch.FloatTensor of size 2x3]

Indexing, Slicing, Joining and Mutating Ops

torch.cat(inputs, dimension=0) → Tensor
    Concatenates the given sequence of tensors along the given dimension. torch.cat() can be seen as the inverse of torch.split() and torch.chunk().
    Parameters: inputs (sequence of Tensors) – any python sequence of tensors of the same type; dimension (int, optional) – the dimension along which to concatenate

    >>> x = torch.randn(2, 3)
    >>> torch.cat((x, x, x), 0)   # size 6x3
    >>> torch.cat((x, x, x), 1)   # size 2x9

torch.chunk(tensor, chunks, dim=0)
    Splits a tensor into the given number of chunks along the given dimension.
    Parameters: tensor (Tensor) – the tensor to split; chunks (int) – the number of chunks to return; dim (int) – the dimension along which to split

torch.gather(input, dim, index, out=None) → Tensor
    Gathers values of input along the axis dim at the positions given by index. For a 3-D tensor, the output is defined as:

        out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
        out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
        out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2

    Parameters: input (Tensor) – the source tensor; dim (int) – the axis along which to index; index (LongTensor) – the indices of the elements to gather; out (Tensor, optional) – output tensor

    >>> t = torch.Tensor([[1, 2], [3, 4]])
    >>> torch.gather(t, 1, torch.LongTensor([[0, 0], [1, 0]]))
     1  1
     4  3
    [torch.FloatTensor of size 2x2]
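As a worked check of the gather formula above, here is a minimal sketch (using only functions documented in this section; the loop is purely illustrative) that compares torch.gather along dim=1 with explicit element-by-element indexing:

    import torch

    t = torch.Tensor([[1, 2], [3, 4]])
    index = torch.LongTensor([[0, 0], [1, 0]])

    # gather along dim=1: out[i][j] = t[i][index[i][j]]
    out = torch.gather(t, 1, index)

    # the same values collected one element at a time
    expected = torch.Tensor(2, 2)
    for i in range(2):
        for j in range(2):
            expected[i][j] = t[i][int(index[i][j])]

    print(out)                          # [[1, 1], [4, 3]]
    print(torch.equal(out, expected))   # True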
torch.index_select(input, dim, index, out=None) → Tensor
    Returns a new tensor that indexes the input tensor along dimension dim using the entries in index (a LongTensor). The returned tensor has the same number of dimensions as input, and its dim-th dimension has the same length as index. Note: the returned tensor does NOT share memory with the original tensor.
    Parameters: input (Tensor) – the input tensor; dim (int) – the dimension to index along; index (LongTensor) – the 1-D tensor of indices to select; out (Tensor, optional) – output tensor

    >>> x = torch.randn(3, 4)
    >>> indices = torch.LongTensor([0, 2])
    >>> torch.index_select(x, 0, indices)   # rows 0 and 2 of x, size 2x4
    >>> torch.index_select(x, 1, indices)   # columns 0 and 2 of x, size 3x2

torch.masked_select(input, mask, out=None) → Tensor
    Returns a new 1-D tensor containing the elements of input selected where the binary mask (a ByteTensor) is 1. mask must have the same number of elements as input, but its shape need not match. The returned tensor does not share memory with the original tensor.
    Parameters: input (Tensor) – the input tensor; mask (ByteTensor) – the binary mask; out (Tensor, optional) – output tensor

torch.nonzero(input, out=None) → LongTensor
    Returns a tensor containing the indices of all non-zero elements of input. Each row of the output holds the index of one non-zero element: if input has n dimensions and z non-zero elements, the output is of shape z × n.
    Parameters: input (Tensor) – the input tensor; out (LongTensor, optional) – output tensor

    >>> torch.nonzero(torch.Tensor([1, 1, 1, 0, 1]))
     0
     1
     2
     4
    [torch.LongTensor of size 4x1]
    >>> torch.nonzero(torch.Tensor([[0.6, 0.0, 0.0, 0.0],
    ...                             [0.0, 0.4, 0.0, 0.0],
    ...                             [0.0, 0.0, 1.2, 0.0],
    ...                             [0.0, 0.0, 0.0, -0.4]]))
     0  0
     1  1
     2  2
     3  3
    [torch.LongTensor of size 4x2]

torch.split(tensor, split_size, dim=0)
    Splits the tensor into equally sized chunks of size split_size along dimension dim; the last chunk is smaller if the size of the tensor along dim is not divisible by split_size.
    Parameters: tensor (Tensor) – the tensor to split; split_size (int) – the size of a single chunk; dim (int) – the dimension along which to split

torch.squeeze(input, dim=None, out=None)
    Returns a tensor with all dimensions of input of size 1 removed: for an input of shape (A×1×B×C×1×D), the output has shape (A×B×C×D). When dim is given, squeezing is done only in that dimension: if input is of shape (A×1×B), squeeze(input, 0) leaves the tensor unchanged, while squeeze(input, 1) yields shape (A×B). Note: the returned tensor shares storage with the input, so changing the contents of one changes the other.
    Parameters: input (Tensor) – the input tensor; dim (int, optional) – if given, squeeze only in this dimension; out (Tensor, optional) – output tensor

    >>> x = torch.zeros(2, 1, 2, 1, 2)
    >>> x.size()
    (2L, 1L, 2L, 1L, 2L)
    >>> y = torch.squeeze(x)
    >>> y.size()
    (2L, 2L, 2L)
    >>> y = torch.squeeze(x, 0)
    >>> y.size()
    (2L, 1L, 2L, 1L, 2L)
    >>> y = torch.squeeze(x, 1)
    >>> y.size()
    (2L, 2L, 1L, 2L)

torch.stack(sequence, dim=0)
    Concatenates a sequence of tensors along a new dimension. All tensors must be of the same size.
    Parameters: sequence (Sequence) – the sequence of tensors to stack; dim (int) – the dimension to insert, defaults to 0

torch.t(input, out=None) → Tensor
    Expects input to be a matrix (2-D tensor) and transposes dimensions 0 and 1; equivalent to transpose(input, 0, 1).
    Parameters: input (Tensor) – the input tensor; out (Tensor, optional) – output tensor

    >>> x = torch.randn(2, 3)
    >>> torch.t(x)   # size 3x2

torch.transpose(input, dim0, dim1, out=None) → Tensor
    Returns a tensor that is a transposed version of input, with dimensions dim0 and dim1 swapped. The result shares storage with the input tensor.
    Parameters: input (Tensor) – the input tensor; dim0 (int) – the first dimension to transpose; dim1 (int) – the second dimension to transpose

    >>> x = torch.randn(2, 3)
    >>> torch.transpose(x, 0, 1)   # size 3x2

torch.unbind(tensor, dim=0)
    Removes the given dimension and returns a tuple of all slices along it.
    Parameters: tensor (Tensor) – the tensor to unbind; dim (int) – the dimension to remove

torch.unsqueeze(input, dim, out=None)
    Returns a new tensor with a dimension of size one inserted at the specified position; the returned tensor shares data with the input. A negative dim is interpreted as dim + input.dim() + 1.
    Parameters: input (Tensor) – the input tensor; dim (int) – the index at which to insert the singleton dimension; out (Tensor, optional) – output tensor

    >>> x = torch.Tensor([1, 2, 3, 4])
    >>> torch.unsqueeze(x, 0)
     1  2  3  4
    [torch.FloatTensor of size 1x4]
    >>> torch.unsqueeze(x, 1)
     1
     2
     3
     4
    [torch.FloatTensor of size 4x1]

Random Sampling

torch.manual_seed(seed)
    Sets the seed for generating random numbers.
    Parameters: seed (int or long) – the desired seed

torch.initial_seed()
    Returns the initial seed for generating random numbers (a python long).

torch.get_rng_state()
    Returns the state of the random number generator as a torch.ByteTensor.

torch.set_rng_state(new_state)
    Sets the state of the random number generator.
    Parameters: new_state (torch.ByteTensor) – the desired state

torch.default_generator = <torch._C.Generator object>

torch.bernoulli(input, out=None) → Tensor
    Draws binary random numbers (0 or 1) from a Bernoulli distribution. input holds the probabilities used for drawing, so all of its values must lie in the interval [0, 1]; the i-th element of the output is 1 with the probability given by the i-th element of input.
    Parameters: input (Tensor) – the input tensor of probabilities; out (Tensor, optional) – output tensor

    >>> a = torch.Tensor(3, 3).uniform_(0, 1)  # generate a uniform random matrix with range [0, 1]
    >>> torch.bernoulli(a)
    >>> a = torch.ones(3, 3)   # probability of drawing "1" is 1
    >>> torch.bernoulli(a)     # all ones
    >>> a = torch.zeros(3, 3)  # probability of drawing "1" is 0
    >>> torch.bernoulli(a)     # all zeros
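All of the sampling functions above draw from the global generator, so torch.manual_seed makes their output reproducible. A minimal sketch (assuming nothing beyond the functions documented in this section):

    import torch

    torch.manual_seed(42)
    a = torch.bernoulli(torch.Tensor(3, 3).uniform_(0, 1))

    torch.manual_seed(42)   # restore the generator to the same state
    b = torch.bernoulli(torch.Tensor(3, 3).uniform_(0, 1))

    print(torch.equal(a, b))   # True: identical draws after reseeding

The same effect can be achieved mid-stream with get_rng_state / set_rng_state, which save and restore the generator state without fixing a seed.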
torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor
    Returns a tensor in which each row contains num_samples indices sampled from the multinomial distribution defined by the corresponding row of weights in input. [Note]: the rows of input need not sum to 1 (they are used as relative weights), but they must be non-negative and must not sum to 0. If input is a vector, out is a vector of length num_samples; if input is an m-row matrix, out is an m × num_samples matrix. With replacement=True, samples are drawn with replacement; otherwise a sample index cannot be drawn again within a row, which requires num_samples to be no larger than the number of non-zero weights in each row.
    Parameters: input (Tensor) – the tensor of weights; num_samples (int) – the number of samples to draw; replacement (bool, optional) – whether to draw with replacement; out (Tensor, optional) – output tensor

    >>> weights = torch.Tensor([0, 10, 3, 0])  # create a Tensor of weights
    >>> torch.multinomial(weights, 4)
     1
     2
     0
     0
    [torch.LongTensor of size 4]
    >>> torch.multinomial(weights, 4, replacement=True)
    [torch.LongTensor of size 4]

torch.normal(means, std, out=None)
    Returns a tensor of random numbers drawn from separate normal distributions whose means and standard deviations are given element-wise. means is a tensor of per-element means and std a tensor of per-element standard deviations; their shapes need not match, but they must contain the same number of elements.
    Parameters: means (Tensor) – the tensor of per-element means; std (Tensor) – the tensor of per-element standard deviations; out (Tensor, optional) – output tensor

    >>> torch.normal(means=torch.arange(1, 11), std=torch.arange(1, 0, -0.1))
    [torch.FloatTensor of size 10]

torch.normal(mean=0.0, std, out=None)
    As above, but the mean is shared among all drawn elements.
    Parameters: mean (float, optional) – the mean of all distributions; std (Tensor) – the tensor of per-element standard deviations; out (Tensor, optional) – output tensor

    >>> torch.normal(mean=0.5, std=torch.arange(1, 6))
    [torch.FloatTensor of size 5]

torch.normal(means, std=1.0, out=None)
    As above, but the standard deviation is shared among all drawn elements.
    Parameters: means (Tensor) – the tensor of per-element means; std (float, optional) – the standard deviation of all distributions; out (Tensor, optional) – output tensor

    >>> torch.normal(means=torch.arange(1, 6))
    [torch.FloatTensor of size 5]

Serialization

torch.save(obj, f, pickle_module=pickle, pickle_protocol=2)
    Saves an object to a disk file. This is the recommended approach for saving tensors and models.
    Parameters: obj – the object to save; f – a file-like object (one that exposes a file descriptor) or a string containing a file name; pickle_module – the module used for pickling metadata and objects; pickle_protocol – can be specified to override the default pickle protocol

torch.load(f, map_location=None, pickle_module=pickle)
    Loads an object saved with torch.save() from a file. torch.load can dynamically remap storages so that a file can be read on a different set of devices. If map_location is a callable, it is invoked once for each serialized storage with two arguments, storage and location tag; it should return None (in which case the location is resolved by the default method) or a remapped storage. By default the location tag is "cpu" for host tensors and 'cuda:device_id' (e.g. 'cuda:2') for CUDA tensors. map_location may also be a dict mapping location tags to alternative tags.
    Parameters: f – a file-like object (one that exposes a file descriptor) or a string containing a file name; map_location – a function or a dict specifying how to remap storage locations; pickle_module – the module used for unpickling metadata and objects (must match the pickle_module used to serialize the file)

    >>> torch.load('tensors.pt')
    >>> # Load all tensors onto the CPU
    >>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
    >>> # Map tensors from GPU 1 to GPU 0
    >>> torch.load('tensors.pt', map_location={'cuda:1': 'cuda:0'})

Parallelism

torch.get_num_threads() → int
    Gets the number of OpenMP threads used for parallelizing CPU operations.

torch.set_num_threads(int)
    Sets the number of OpenMP threads used for parallelizing CPU operations.
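A minimal save/load round trip for the serialization functions above; this is a sketch that assumes the working directory is writable and reuses the tensors.pt file name from the examples:

    import torch

    x = torch.randn(3, 3)
    torch.save(x, 'tensors.pt')       # serialize the tensor to disk

    y = torch.load('tensors.pt')      # plain deserialization
    # force every storage onto the CPU, as in the map_location example above
    z = torch.load('tensors.pt', map_location=lambda storage, loc: storage)

    print(torch.equal(x, y), torch.equal(x, z))   # True True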
Math Operations

torch.abs(input, out=None) → Tensor
    Computes the element-wise absolute value of the given input tensor.

    >>> torch.abs(torch.FloatTensor([-1, -2, 3]))
    FloatTensor([1, 2, 3])

torch.add(input, value, out=None)
    Adds the scalar value to each element of input and returns the result. If input is a FloatTensor or DoubleTensor, value must be a real number; otherwise it must be an integer. [Translator's note: this does not appear to hold in practice; value may be an integer or a real number regardless of the input type.]
    Parameters: input (Tensor) – the input tensor; value (Number) – the number added to each element; out (Tensor, optional) – output tensor

    >>> a = torch.randn(4)
    >>> torch.add(a, 20)
    [torch.FloatTensor of size 4]

torch.add(input, value=1, other, out=None)
    Scales other by value and adds it to input element-wise: out = input + value * other. The two tensors must contain the same number of elements. If other is a FloatTensor or DoubleTensor, value must be a real number; otherwise an integer. [Translator's note: as above.]
    Parameters: input (Tensor) – the first input tensor; value (Number) – the scale factor for other; other (Tensor) – the second input tensor; out (Tensor, optional) – output tensor

    >>> import torch
    >>> a = torch.randn(4)
    >>> b = torch.randn(2, 2)
    >>> torch.add(a, 10, b)
    [torch.FloatTensor of size 4]

torch.addcdiv(tensor, value=1, tensor1, tensor2, out=None) → Tensor
    Divides tensor1 by tensor2 element-wise, multiplies the result by the scalar value, and adds it to tensor. The number of elements of tensor, tensor1 and tensor2 must match (their shapes may differ).
    Parameters: tensor (Tensor) – the tensor to be added; value (Number, optional) – the multiplier for tensor1 / tensor2; tensor1 (Tensor) – the numerator tensor (dividend); tensor2 (Tensor) – the denominator tensor (divisor); out (Tensor, optional) – output tensor

    >>> t = torch.randn(2, 3)
    >>> t1 = torch.randn(1, 6)
    >>> t2 = torch.randn(6, 1)
    >>> torch.addcdiv(t, 0.1, t1, t2)
    [torch.FloatTensor of size 2x3]

torch.addcmul(tensor, value=1, tensor1, tensor2, out=None) → Tensor
    Multiplies tensor1 by tensor2 element-wise, multiplies the result by the scalar value, and adds it to tensor. The number of elements must match; for FloatTensor or DoubleTensor inputs, value must be a real number, otherwise an integer.
    Parameters: as for torch.addcdiv

    >>> t = torch.randn(2, 3)
    >>> t1 = torch.randn(1, 6)
    >>> t2 = torch.randn(6, 1)
    >>> torch.addcmul(t, 0.1, t1, t2)
    [torch.FloatTensor of size 2x3]

torch.asin(input, out=None) → Tensor
    Returns a new tensor with the arcsine of the elements of input.
    Parameters: input (Tensor) – the input tensor; out (Tensor, optional) – output tensor

torch.atan(input, out=None) → Tensor
    Returns a new tensor with the arctangent of the elements of input.

torch.atan2(input1, input2, out=None) → Tensor
    Returns a new tensor with the arctangent of input1 / input2, computed element-wise.
    Parameters: input1 (Tensor) – the first input tensor; input2 (Tensor) – the second input tensor; out (Tensor, optional) – output tensor

torch.ceil(input, out=None) → Tensor
    Returns a new tensor with the ceiling of the elements of input: the smallest integer greater than or equal to each element.

torch.clamp(input, min, max, out=None) → Tensor
    Clamps all elements in input into the range [min, max] and returns the result:

              | min, if x_i < min
        y_i = | x_i, if min <= x_i <= max
              | max, if x_i > max

    If input is a FloatTensor or DoubleTensor, min and max must be real numbers; otherwise they must be integers. [Translator's note: as above, min and max may be integer or real regardless of the input type.]
    Parameters: input (Tensor) – the input tensor; min (Number) – the lower bound of the clamping range; max (Number) – the upper bound of the clamping range; out (Tensor, optional) – output tensor

    >>> a = torch.randn(4)
    >>> torch.clamp(a, min=-0.5, max=0.5)
    [torch.FloatTensor of size 4]

torch.clamp(input, *, min, out=None) → Tensor
    Clamps all elements in input to be greater than or equal to min.
    Parameters: input (Tensor); min (Number) – the minimal value of each element of the output; out (Tensor, optional)

    >>> a = torch.randn(4)
    >>> torch.clamp(a, min=0.5)

torch.clamp(input, *, max, out=None) → Tensor
    Clamps all elements in input to be less than or equal to max. For FloatTensor or DoubleTensor inputs, max must be a real number, otherwise an integer.
    Parameters: input (Tensor); max (Number) – the maximal value of each element of the output; out (Tensor, optional)

    >>> a = torch.randn(4)
    >>> torch.clamp(a, max=0.5)

torch.cos(input, out=None) → Tensor
    Returns a new tensor with the cosine of the elements of input.

torch.cosh(input, out=None) → Tensor
    Returns a new tensor with the hyperbolic cosine of the elements of input.

torch.div(input, value, out=None)
    Divides each element of input by the scalar value: out = input / value. For FloatTensor or DoubleTensor inputs, value must be a real number, otherwise an integer. [Translator's note: as above.]
    Parameters: input (Tensor) – the input tensor; value (Number) – the divisor; out (Tensor, optional) – output tensor

    >>> a = torch.randn(5)
    >>> torch.div(a, 0.5)
    [torch.FloatTensor of size 5]

torch.div(input, other, out=None)
    Divides input by other element-wise: out_i = input_i / other_i. The two tensors must contain the same number of elements; the shape of input determines the shape of the output.
    Parameters: input (Tensor) – the numerator tensor; other (Tensor) – the denominator tensor; out (Tensor, optional) – output tensor

    >>> a = torch.randn(4, 4)
    >>> b = torch.randn(8, 2)
    >>> torch.div(a, b)
    [torch.FloatTensor of size 4x4]

torch.exp(input, out=None) → Tensor
    Returns a new tensor with the exponential of the elements of input.

    >>> torch.exp(torch.Tensor([0, math.log(2)]))
    torch.FloatTensor([1, 2])

torch.floor(input, out=None) → Tensor
    Returns a new tensor with the floor of the elements of input: the largest integer less than or equal to each element.

torch.fmod(input, divisor, out=None) → Tensor
    Computes the element-wise remainder of division; the result has the same sign as the dividend input.
    Parameters: input (Tensor) – the dividend; divisor (Tensor or float) – the divisor, a number or a tensor of the same type as the dividend; out (Tensor, optional) – output tensor

    >>> torch.fmod(torch.Tensor([-3, -2, -1, 1, 2, 3]), 2)
    torch.FloatTensor([-1, -0, -1, 1, 0, 1])
    >>> torch.fmod(torch.Tensor([1, 2, 3, 4, 5]), 1.5)

    See also: torch.remainder(), which computes the element-wise remainder equivalently to python's % operator.

torch.lerp(start, end, weight, out=None)
    Performs a linear interpolation of the two tensors start and end based on the scalar weight: out = start + weight * (end - start).
    Parameters: start (Tensor) – the tensor of starting points; end (Tensor) – the tensor of ending points; weight (float) – the weight for the interpolation formula; out (Tensor, optional) – output tensor

    >>> start = torch.arange(1, 5)
    >>> end = torch.Tensor(4).fill_(10)
    >>> torch.lerp(start, end, 0.5)
     5.5000
     6.0000
     6.5000
     7.0000
    [torch.FloatTensor of size 4]

torch.log(input, out=None) → Tensor
    Returns a new tensor with the natural logarithm of the elements of input.

torch.log1p(input, out=None) → Tensor
    Returns a new tensor with the natural logarithm of (1 + input): y_i = log(x_i + 1). For small values of input this is more accurate than torch.log().

torch.mul(input, value, out=None)
    Multiplies each element of input by the scalar value. For FloatTensor or DoubleTensor inputs, value must be a real number, otherwise an integer. [Translator's note: as above.]
    Parameters: input (Tensor) – the input tensor; value (Number) – the multiplier; out (Tensor, optional) – output tensor

    >>> a = torch.randn(3)
    >>> torch.mul(a, 100)
    [torch.FloatTensor of size 3]

torch.mul(input, other, out=None)
    Multiplies the two tensors element-wise: out_i = input_i * other_i. The two tensors must contain the same number of elements; note that when the shapes do not match, the shape of input determines the shape of the output.
    Parameters: input (Tensor) – the first multiplicand tensor; other (Tensor) – the second multiplicand tensor; out (Tensor, optional) – output tensor

    >>> a = torch.randn(4, 4)
    >>> b = torch.randn(2, 8)
    >>> torch.mul(a, b)
    [torch.FloatTensor of size 4x4]

torch.neg(input, out=None) → Tensor
    Returns a new tensor with the negation of the elements of input: out = -input.

torch.pow(input, exponent, out=None)
    Raises each element of input to the power exponent. exponent may be a single float or a tensor with the same number of elements as input: with a scalar exponent, out_i = x_i ^ exponent; with a tensor exponent, the operation is applied element-wise, out_i = x_i ^ exponent_i.
    Parameters: input (Tensor) – the input tensor; exponent (float or Tensor) – the exponent; out (Tensor, optional) – output tensor

    >>> a = torch.randn(4)
    >>> torch.pow(a, 2)
    >>> exp = torch.arange(1, 5)
    >>> a = torch.arange(1, 5)
    >>> torch.pow(a, exp)
       1
       4
      27
     256
    [torch.FloatTensor of size 4]

torch.pow(base, input, out=None)
    Here base is a scalar float and input a tensor; the returned tensor out has the same shape as input, with out_i = base ^ input_i.
    Parameters: base (float) – the scalar base of the power operation; input (Tensor) – the exponent tensor; out (Tensor, optional) – output tensor

    >>> exp = torch.arange(1, 5)
    >>> base = 2
    >>> torch.pow(base, exp)
      2
      4
      8
     16
    [torch.FloatTensor of size 4]

torch.reciprocal(input, out=None) → Tensor
    Returns a new tensor with the reciprocal of the elements of input: out = 1.0 / input.

torch.remainder(input, divisor, out=None) → Tensor
    Computes the element-wise remainder of division; the result has the same sign as the divisor.
    Parameters: input (Tensor) – the dividend; divisor (Tensor or float) – the divisor, a number or a tensor of the same type as the dividend; out (Tensor, optional) – output tensor

    >>> torch.remainder(torch.Tensor([-3, -2, -1, 1, 2, 3]), 2)
    torch.FloatTensor([1, 0, 1, 1, 0, 1])
    >>> torch.remainder(torch.Tensor([1, 2, 3, 4, 5]), 1.5)

    See also: torch.fmod(), which computes the element-wise remainder equivalently to C's fmod function.

torch.round(input, out=None) → Tensor
    Returns a new tensor with each element of input rounded to the nearest integer.

torch.rsqrt(input, out=None) → Tensor
    Returns a new tensor with the reciprocal of the square root of each element of input.

torch.sigmoid(input, out=None) → Tensor
    Returns a new tensor with the sigmoid of the elements of input.

    >>> a = torch.randn(4)
    >>> torch.sigmoid(a)
    [torch.FloatTensor of size 4]

torch.sign(input, out=None) → Tensor
    Returns a new tensor with the sign of the elements of input.

torch.sin(input, out=None) → Tensor
    Returns a new tensor with the sine of the elements of input.

torch.sinh(input, out=None) → Tensor
    Returns a new tensor with the hyperbolic sine of the elements of input.

torch.sqrt(input, out=None) → Tensor
    Returns a new tensor with the square root of the elements of input.

    >>> a = torch.randn(4)
    >>> torch.sqrt(a)
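Many of the pointwise ops above can be cross-checked against one another; for example, sigmoid can be rebuilt from neg, exp and reciprocal, since sigmoid(x) = 1 / (1 + exp(-x)). A minimal sketch using only functions documented in this section:

    import torch

    a = torch.randn(4)

    s1 = torch.sigmoid(a)
    # sigmoid(x) = 1 / (1 + exp(-x)), assembled from primitives
    s2 = torch.reciprocal(torch.exp(torch.neg(a)) + 1)

    print(torch.dist(s1, s2))   # ~0 up to floating-point error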
torch.tan(input, out=None) → Tensor
    Returns a new tensor with the tangent of the elements of input.

torch.tanh(input, out=None) → Tensor
    Returns a new tensor with the hyperbolic tangent of the elements of input.

torch.trunc(input, out=None) → Tensor
    Returns a new tensor with the truncated integer values of the elements of input (the fractional part of each element is discarded).

Reduction Ops

torch.cumprod(input, dim, out=None) → Tensor
    Returns the cumulative product of the elements of input along the given dimension: y_i = x_1 * x_2 * ... * x_i.
    Parameters: input (Tensor) – the input tensor; dim (int) – the dimension to operate over; out (Tensor, optional) – output tensor

    >>> a = torch.randn(10)
    >>> torch.cumprod(a, 0)
    >>> a[5] = 0.0
    >>> torch.cumprod(a, 0)   # every entry from index 5 onward is now 0

torch.cumsum(input, dim, out=None) → Tensor
    Returns the cumulative sum of the elements of input along the given dimension: y_i = x_1 + x_2 + ... + x_i.
    Parameters: input (Tensor); dim (int); out (Tensor, optional)

    >>> a = torch.randn(10)
    >>> torch.cumsum(a, 0)

torch.dist(input, other, p=2, out=None) → Tensor
    Returns the p-norm of (input - other).
    Parameters: input (Tensor) – the input tensor; other (Tensor) – the right-hand-side tensor; p (float, optional) – the norm to compute; out (Tensor, optional)

    >>> x = torch.randn(4)
    >>> y = torch.randn(4)
    >>> torch.dist(x, y, 3.5)
    >>> torch.dist(x, y, 3)
    >>> torch.dist(x, y, 0)
    >>> torch.dist(x, y, 1)

torch.mean(input) → float
    Returns the mean of all elements of the input tensor.

    >>> a = torch.randn(1, 3)
    >>> torch.mean(a)

torch.mean(input, dim, out=None) → Tensor
    Returns the mean of each row of input along the given dimension dim.
    Parameters: input (Tensor); dim (int) – the dimension to reduce; out (Tensor, optional)

    >>> a = torch.randn(4, 4)
    >>> torch.mean(a, 1)   # size 4x1

torch.median(input, dim=-1, values=None, indices=None) -> (Tensor, LongTensor)
    Returns the median of each row of input along the given dimension, together with a LongTensor holding the indices of the median values. dim defaults to the last dimension.
    Parameters: input (Tensor); dim (int); values (Tensor, optional) – output value tensor; indices (Tensor, optional) – output index tensor

    >>> a = torch.randn(4, 5)
    >>> torch.median(a, 1)

torch.mode(input, dim=-1, values=None, indices=None) -> (Tensor, LongTensor)
    Returns the mode of each row of input along the given dimension, together with a LongTensor holding the indices of the mode values. dim defaults to the last dimension.
    Parameters: input (Tensor); dim (int); values (Tensor, optional); indices (Tensor, optional)

    >>> a = torch.randn(4, 5)
    >>> torch.mode(a, 1)

torch.norm(input, p=2) → float
    Returns the p-norm of the input tensor.
    Parameters: input (Tensor); p (float, optional) – the exponent of the norm

    >>> a = torch.randn(1, 3)
    >>> torch.norm(a, 3)

torch.norm(input, p, dim, out=None) → Tensor
    Returns the p-norm of each row of input along the given dimension dim.
    Parameters: input (Tensor); p (float) – the exponent of the norm; dim (int) – the dimension to reduce; out (Tensor, optional)

    >>> a = torch.randn(4, 2)
    >>> torch.norm(a, 2, 1)   # row-wise 2-norm, size 4x1
    >>> torch.norm(a, 0, 1)   # number of non-zero elements per row

torch.prod(input) → float
    Returns the product of all elements of the input tensor.

    >>> a = torch.randn(1, 3)
    >>> torch.prod(a)

torch.prod(input, dim, out=None) → Tensor
    Returns the product of each row of input along the given dimension dim.
    Parameters: input (Tensor); dim (int); out (Tensor, optional)

    >>> a = torch.randn(4, 2)
    >>> torch.prod(a, 1)

torch.std(input) → float
    Returns the standard deviation of all elements of the input tensor.

    >>> a = torch.randn(1, 3)
    >>> torch.std(a)

torch.std(input, dim, out=None) → Tensor
    Returns the standard deviation of each row of input along the given dimension dim.
    Parameters: input (Tensor); dim (int); out (Tensor, optional)

    >>> a = torch.randn(4, 4)
    >>> torch.std(a, dim=1)

torch.sum(input) → float
    Returns the sum of all elements of the input tensor.

    >>> a = torch.randn(1, 3)
    >>> torch.sum(a)

torch.sum(input, dim, out=None) → Tensor
    Returns the sum of each row of input along the given dimension dim.
    Parameters: input (Tensor); dim (int); out (Tensor, optional)

    >>> a = torch.randn(4, 4)
    >>> torch.sum(a, 1)

torch.var(input) → float
    Returns the variance of all elements of the input tensor.

    >>> a = torch.randn(1, 3)
    >>> torch.var(a)

torch.var(input, dim, out=None) → Tensor
    Returns the variance of each row of input along the given dimension dim.
    Parameters: input (Tensor); dim (int) – the dimension to reduce; out (Tensor, optional)

    >>> a = torch.randn(4, 4)
    >>> torch.var(a, 1)
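As a consistency check on the reduction ops above, the row-wise p-norm can be rebuilt from more primitive reductions; a minimal sketch (using only functions from this chapter) for p = 2:

    import torch

    a = torch.randn(4, 2)

    n1 = torch.norm(a, 2, 1)   # built-in row-wise 2-norm

    # the same value from pointwise and reduction primitives:
    # the square root of the row-wise sum of squares
    n2 = torch.sqrt(torch.sum(torch.pow(a, 2), 1))

    print(torch.dist(n1, n2))   # ~0 up to floating-point error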
Comparison Ops

torch.eq(input, other, out=None) → Tensor
    Compares the elements for equality. The second argument may be a number or a tensor of the same shape and type as the first. Returns a ByteTensor containing 1 at each position where the comparison input == other is true and 0 elsewhere.
    Parameters: input (Tensor) – the tensor to compare; other (Tensor or float) – the tensor or value to compare against; out (Tensor, optional) – the output ByteTensor

torch.equal(tensor1, tensor2) → bool
    True if the two tensors have the same size and the same elements.

torch.ge(input, other, out=None) → Tensor
    Computes input >= other element-wise. The second argument may be a number or a tensor of the same shape and type as the first. Returns a torch.ByteTensor containing 1 where the comparison is true and 0 elsewhere.
    Parameters: input (Tensor); other (Tensor or float); out (Tensor, optional) – the output ByteTensor

torch.gt(input, other, out=None) → Tensor
    Computes input > other element-wise; arguments and return value as for torch.ge.

torch.kthvalue(input, k, dim=None, out=None) -> (Tensor, LongTensor)
    Returns the k-th smallest element of input along the given dimension as a tuple (values, indices), where indices holds the position of each k-th value found. dim defaults to the last dimension.
    Parameters: input (Tensor); k (int) – the k in "k-th smallest"; dim (int, optional) – the dimension to search along; out (tuple, optional) – output tuple (Tensor, LongTensor)

    >>> x = torch.arange(1, 6)
    >>> torch.kthvalue(x, 4)   # value 4, index 3

torch.le(input, other, out=None) → Tensor
    Computes input <= other element-wise; arguments and return value as for torch.ge.

torch.lt(input, other, out=None) → Tensor
    Computes input < other element-wise; arguments and return value as for torch.ge.

torch.max(input) → float
    Returns the maximum of all elements of the input tensor.

    >>> a = torch.randn(1, 3)
    >>> torch.max(a)

torch.max(input, dim, max=None, max_indices=None) -> (Tensor, LongTensor)
    Returns the maximum of each row of input along the given dimension dim, together with the index of each maximum. The returned tensors have size 1 in dimension dim and otherwise match the input shape.
    Parameters: input (Tensor); dim (int) – the dimension to reduce; max (Tensor, optional) – output tensor of maxima; max_indices (LongTensor, optional) – output tensor of indices

    >>> a = torch.randn(4, 4)
    >>> torch.max(a, 1)

torch.max(input, other, out=None) → Tensor
    Takes the element-wise maximum of input and other: out_i = max(input_i, other_i).
    Parameters: input (Tensor); other (Tensor) – the second input tensor; out (Tensor, optional)

    >>> a = torch.randn(4)
    >>> b = torch.randn(4)
    >>> torch.max(a, b)

torch.min(input) → float
    Returns the minimum of all elements of the input tensor.
    Parameters: input (Tensor) – the input tensor

    >>> a = torch.randn(1, 3)
    >>> torch.min(a)

torch.min(input, dim, min=None, min_indices=None) -> (Tensor, LongTensor)
    Returns the minimum of each row of input along the given dimension dim, together with the index of each minimum. The returned tensors have size 1 in dimension dim and otherwise match the input shape.
    Parameters: input (Tensor); dim (int); min (Tensor, optional); min_indices (LongTensor, optional)

    >>> a = torch.randn(4, 4)
    >>> torch.min(a, 1)   # values and indices, each of size 4x1

torch.min(input, other, out=None) → Tensor
    Takes the element-wise minimum of input and other: out_i = min(input_i, other_i). Note: when the shapes do not match, the shape of input determines the shape of the returned tensor.
    Parameters: input (Tensor); other (Tensor) – the second input tensor; out (Tensor, optional)

    >>> a = torch.randn(4)
    >>> b = torch.randn(4)
    >>> torch.min(a, b)

torch.ne(input, other, out=None) → Tensor
    Computes input != other element-wise. The second argument may be a number or a tensor of the same shape and type as the first. Returns a torch.ByteTensor containing 1 where the comparison is true and 0 elsewhere.
    Parameters: input (Tensor); other (Tensor or float); out (Tensor, optional) – the output ByteTensor

torch.sort(input, dim=None, descending=False, out=None) -> (Tensor, LongTensor)
    Sorts the elements of input along the given dimension in ascending order by value. If dim is not given, the last dimension is used. If descending is True, the elements are sorted in descending order. Returns a tuple (sorted_tensor, sorted_indices), where sorted_indices holds the positions of the elements in the original input tensor.
    Parameters: input (Tensor); dim (int, optional) – the dimension to sort along; descending (bool, optional) – controls the sort order; out (tuple, optional) – output tuple (Tensor, LongTensor)

    >>> x = torch.randn(3, 4)
    >>> sorted, indices = torch.sort(x)
    >>> sorted, indices = torch.sort(x, 0)

torch.topk(input, k, dim=None, largest=True, sorted=True, out=None) -> (Tensor, LongTensor)
    Returns the k largest elements of input along the given dimension dim (the last dimension if dim is not given). If largest is False, the k smallest elements are returned instead. Returns a tuple (values, indices), where indices holds the positions of the returned elements in the original input tensor. If sorted is True, the returned k elements are themselves sorted.
    Parameters: input (Tensor); k (int) – the "k" in "top-k"; dim (int, optional) – the dimension to sort along; largest (bool, optional) – return largest or smallest elements; sorted (bool, optional) – return the elements in sorted order; out (tuple, optional) – output tuple (Tensor, LongTensor)

    >>> x = torch.arange(1, 6)
    >>> torch.topk(x, 3)                     # values (5, 4, 3), indices (4, 3, 2)
    >>> torch.topk(x, 3, 0, largest=False)   # values (1, 2, 3), indices (0, 1, 2)

Other Ops

torch.cross(input, other, dim=-1, out=None) → Tensor
    Returns the cross product of vectors in dimension dim of input and other. input and other must have the same size, and the size of their dim dimension must be 3.
    Parameters: input (Tensor); other (Tensor) – the second input tensor; dim (int, optional) – the dimension to take the cross product in; out (Tensor, optional)

    >>> a = torch.randn(4, 3)
    >>> b = torch.randn(4, 3)
    >>> torch.cross(a, b, dim=1)
    >>> torch.cross(a, b)   # same result: dim defaults to the last dimension

torch.diag(input, diagonal=0, out=None) → Tensor
    If input is a vector (1-D tensor), returns a 2-D square tensor with the elements of input on the diagonal. If input is a matrix (2-D tensor), returns a 1-D tensor holding the diagonal elements of input. The argument diagonal selects which diagonal to use: diagonal = 0 is the main diagonal, diagonal > 0 lies above it, diagonal < 0 below it.
    Parameters: input (Tensor); diagonal (int, optional) – the diagonal to consider; out (Tensor, optional)

    >>> a = torch.randn(3)
    >>> torch.diag(a)       # 3x3 matrix with a on the main diagonal
    >>> torch.diag(a, 1)    # 4x4 matrix with a on the first super-diagonal
    >>> a = torch.randn(3, 3)
    >>> torch.diag(a, 0)    # the main diagonal of a
    >>> torch.diag(a, 1)    # the first super-diagonal of a

torch.histc(input, bins=100, min=0, max=0, out=None) → Tensor
    Computes the histogram of a tensor: the elements are sorted into bins equal-width bins between min and max. If min and max are both 0, the minimum and maximum of the data are used as the boundaries.
    Parameters: input (Tensor); bins (int) – the number of bins (default 100); min (int) – the lower bound of the range (inclusive); max (int) – the upper bound of the range (inclusive); out (Tensor, optional)

    >>> torch.histc(torch.FloatTensor([1, 2, 1]), bins=4, min=0, max=3)
    FloatTensor([0, 2, 1, 0])

torch.renorm(input, p, dim, maxnorm, out=None) → Tensor
    Returns a tensor in which each sub-tensor of input along dimension dim is renormalized so that its p-norm is at most maxnorm; a sub-tensor whose p-norm is already below maxnorm is left unchanged. Note: this function mirrors the renorm function of torch7 (cf. the max-norm constraint of Hinton et al. 2012, p. 2).
    Parameters: input (Tensor); p (float) – the power of the norm; dim (int) – the dimension to slice over; maxnorm (float) – the maximum norm each sub-tensor may have; out (Tensor, optional)

    >>> x = torch.ones(3, 3)
    >>> x[1].fill_(2)
    >>> x[2].fill_(3)
    >>> x
     1  1  1
     2  2  2
     3  3  3
    >>> torch.renorm(x, 1, 0, 5)
     1.0000  1.0000  1.0000
     1.6667  1.6667  1.6667
     1.6667  1.6667  1.6667

torch.trace(input) → float
    Returns the sum of the elements on the main diagonal of the input 2-D matrix.

    >>> x = torch.arange(1, 10).view(3, 3)
    >>> x
     1  2  3
     4  5  6
     7  8  9
    >>> torch.trace(x)
    15.0

torch.tril(input, k=0, out=None) → Tensor
    Returns the lower triangular part of the matrix (2-D tensor) input in out; the other elements of out are set to 0. The lower triangular part consists of the elements on and below the selected diagonal. The argument k controls which diagonal is used: k = 0 is the main diagonal, k > 0 lies above it, k < 0 below it.
    Parameters: input (Tensor); k (int, optional) – the diagonal to consider; out (Tensor, optional)

    >>> a = torch.randn(3, 3)
    >>> torch.tril(a)
    >>> torch.tril(a, k=1)
    >>> torch.tril(a, k=-1)

torch.triu(input, k=0, out=None) → Tensor
    Returns the upper triangular part of the matrix (2-D tensor) input in out; the other elements of out are set to 0. The upper triangular part consists of the elements on and above the selected diagonal; k is interpreted as in torch.tril.
    Parameters: input (Tensor); k (int, optional); out (Tensor, optional)

    >>> a = torch.randn(3, 3)
    >>> torch.triu(a)
    >>> torch.triu(a, k=1)
    >>> torch.triu(a, k=-1)
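The triangular ops above decompose a square matrix cleanly: the strictly lower part, the diagonal and the strictly upper part add back up to the input. A minimal sketch (using only functions documented in this section):

    import torch

    a = torch.randn(3, 3)

    lower = torch.tril(a, -1)           # strictly below the main diagonal
    diag = torch.diag(torch.diag(a))    # the main diagonal, expanded back into a matrix
    upper = torch.triu(a, 1)            # strictly above the main diagonal

    print(torch.dist(a, lower + diag + upper))   # ~0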
BLAS and LAPACK Operations

torch.addbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) → Tensor
    Performs a batch matrix-matrix product of the matrices in batch1 and batch2 with a reduced add step (all matrix products are summed along the first dimension), and adds mat to the result. batch1 and batch2 must be 3-D tensors containing the same number of matrices: if batch1 is b × n × m and batch2 is b × m × p, then mat and out are n × p matrices, and

        res = (beta * mat) + (alpha * sum(batch1_i @ batch2_i, i = 0..b-1))

    For FloatTensor or DoubleTensor inputs, alpha and beta must be real numbers; otherwise they must be integers.
    Parameters: beta (Number, optional) – the multiplier for mat; mat (Tensor) – the matrix to be added; alpha (Number, optional) – the multiplier for batch1 @ batch2; batch1 (Tensor) – the first batch of matrices; batch2 (Tensor) – the second batch of matrices; out (Tensor, optional) – output tensor

    >>> M = torch.randn(3, 5)
    >>> batch1 = torch.randn(10, 3, 4)
    >>> batch2 = torch.randn(10, 4, 5)
    >>> torch.addbmm(M, batch1, batch2)   # size 3x5

torch.addmm(beta=1, mat, alpha=1, mat1, mat2, out=None) → Tensor
    Performs a matrix multiplication of mat1 and mat2 and adds mat to the result. If mat1 is n × m and mat2 is m × p, then mat and out are n × p. alpha and beta are the scaling factors of mat1 @ mat2 and mat respectively:

        out = (beta * mat) + (alpha * mat1 @ mat2)

    For FloatTensor or DoubleTensor inputs, beta and alpha must be real numbers; otherwise they must be integers.
    Parameters: beta (Number, optional) – the multiplier for mat; mat (Tensor) – the matrix to be added; alpha (Number, optional) – the multiplier for mat1 @ mat2; mat1 (Tensor) – the first matrix; mat2 (Tensor) – the second matrix; out (Tensor, optional)

    >>> M = torch.randn(2, 3)
    >>> mat1 = torch.randn(2, 3)
    >>> mat2 = torch.randn(3, 3)
    >>> torch.addmm(M, mat1, mat2)   # size 2x3

torch.addmv(beta=1, tensor, alpha=1, mat, vec, out=None) → Tensor
    Performs a matrix-vector product of the matrix mat and the vector vec and adds tensor to the result. If mat is n × m and vec is a 1-D tensor of size m, then tensor and out are 1-D tensors of size n. alpha and beta are the scaling factors of mat @ vec and tensor respectively:

        out = (beta * tensor) + (alpha * (mat @ vec))

    Parameters: beta (Number, optional) – the multiplier for tensor; tensor (Tensor) – the vector to be added; alpha (Number, optional) – the multiplier for mat @ vec; mat (Tensor) – the matrix; vec (Tensor) – the vector; out (Tensor, optional)

    >>> M = torch.randn(2)
    >>> mat = torch.randn(2, 3)
    >>> vec = torch.randn(3)
    >>> torch.addmv(M, mat, vec)

torch.addr(beta=1, mat, alpha=1, vec1, vec2, out=None) → Tensor
    Performs the outer product of vec1 and vec2 and adds it to the matrix mat. If vec1 is a vector of size n and vec2 a vector of size m, then mat and out are n × m matrices. beta and alpha are the scaling factors of mat and vec1 ? vec2 respectively:

        out = (beta * mat) + (alpha * vec1 ? vec2)

    Parameters: beta (Number, optional) – the multiplier for mat; mat (Tensor) – the matrix to be added; alpha (Number, optional) – the multiplier for vec1 ? vec2; vec1 (Tensor) – the first vector; vec2 (Tensor) – the second vector; out (Tensor, optional)

    >>> vec1 = torch.arange(1, 4)
    >>> vec2 = torch.arange(1, 3)
    >>> M = torch.zeros(3, 2)
    >>> torch.addr(M, vec1, vec2)
     1  2
     2  4
     3  6
    [torch.FloatTensor of size 3x2]

torch.baddbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) → Tensor
    Performs a batch matrix-matrix product of the matrices in batch1 and batch2 and adds mat to each product. batch1, batch2 and mat must be 3-D tensors containing the same number of matrices: if batch1 is b × n × m and batch2 is b × m × p, then mat and out are b × n × p, with

        out_i = (beta * mat_i) + (alpha * batch1_i @ batch2_i)

    Parameters: beta (Number, optional); mat (Tensor) – the batch of matrices to be added; alpha (Number, optional); batch1 (Tensor); batch2 (Tensor); out (Tensor, optional)

    >>> M = torch.randn(10, 3, 5)
    >>> batch1 = torch.randn(10, 3, 4)
    >>> batch2 = torch.randn(10, 4, 5)
    >>> torch.baddbmm(M, batch1, batch2).size()
    torch.Size([10, 3, 5])

torch.bmm(batch1, batch2, out=None) → Tensor
    Performs a batch matrix-matrix product of the matrices stored in batch1 and batch2. Both must be 3-D tensors containing the same number of matrices: if batch1 is b × n × m and batch2 is b × m × p, then out is b × n × p.
    Parameters: batch1 (Tensor) – the first batch of matrices; batch2 (Tensor) – the second batch of matrices; out (Tensor, optional)

    >>> batch1 = torch.randn(10, 3, 4)
    >>> batch2 = torch.randn(10, 4, 5)
    >>> res = torch.bmm(batch1, batch2)
    >>> res.size()
    torch.Size([10, 3, 5])

torch.btrifact(A, info=None) → Tensor, IntTensor
    Computes the LU factorization with partial pivoting of each square matrix in the mini-batch A. The optional info IntTensor reports whether each factorization succeeded; its values are filled from dgetrf, and a non-zero value indicates an error occurred. If CUDA is being used, the info values come from CUBLAS, otherwise from LAPACK.
    Parameters: A (Tensor) – the tensor of matrices to factor

    >>> A = torch.randn(2, 3, 3)
    >>> A_LU = A.btrifact()

torch.btrisolve(b, LU_data, LU_pivots) → Tensor
    Solves the batched system of equations Ax = b using the partially pivoted LU factorization of A from btrifact.
    Parameters: b (Tensor) – the RHS tensor; LU_data (Tensor) – the pivoted LU factorization of A from btrifact; LU_pivots (IntTensor) – the pivots of the LU factorization

    >>> A = torch.randn(2, 3, 3)
    >>> b = torch.randn(2, 3)
    >>> A_LU = torch.btrifact(A)
    >>> x = b.btrisolve(*A_LU)
    >>> torch.norm(A.bmm(x.unsqueeze(2)) - b)   # close to zero

torch.dot(tensor1, tensor2) → float
    Computes the dot product (inner product) of the two 1-D tensors.
    Parameters: tensor1 (Tensor) – the first input tensor; tensor2 (Tensor) – the second input tensor

torch.eig(a, eigenvectors=False, out=None) -> (Tensor, Tensor)
    Computes the eigenvalues and, optionally, the eigenvectors of a real square matrix.
    Parameters: a (Tensor) – the square matrix to decompose; eigenvectors (bool) – if True, compute the eigenvectors as well; out (tuple, optional) – the output tuple
    Returns: a tuple (e, v), where e holds the eigenvalues and v holds the eigenvectors if eigenvectors is True, otherwise an empty tensor
    Return type: (Tensor, Tensor)

torch.gels(B, A, out=None) → Tensor
    Computes the solution of the least-squares problem for a full-rank m × n matrix A: if m >= n, gels solves

        minimize ||A X - B||_F

    The first n rows of the returned matrix X contain the solution; the remaining rows contain residual information (the squared Euclidean norm of each column from row n on gives the residual of the corresponding column).
    Parameters: B (Tensor) – the matrix of right-hand sides; A (Tensor) – the m × n design matrix; out (tuple, optional) – the output tuple
    Returns: a tuple (X, qr), where X is the solution and qr holds the details of the QR factorization

    >>> A = torch.Tensor([[1, 1, 1],
    ...                   [2, 3, 4],
    ...                   [3, 5, 2],
    ...                   [4, 2, 5],
    ...                   [5, 4, 3]])
    >>> B = torch.Tensor([[-10, -3],
    ...                   [12, 14],
    ...                   [14, 12],
    ...                   [16, 16],
    ...                   [18, 16]])
    >>> X, _ = torch.gels(B, A)
    >>> X[:3]   # the solution rows
     2.0000  1.0000
     1.0000  1.0000
     1.0000  2.0000

torch.geqrf(input, out=None) -> (Tensor, Tensor)
    A low-level function for calling LAPACK directly. Rather than computing a QR decomposition with explicit Q and R, this directly calls the underlying LAPACK function ?geqrf, which produces a sequence of 'elementary reflectors'.
    Parameters: input (Tensor) – the input matrix; out (tuple, optional) – the output tuple (Tensor, Tensor)

torch.ger(vec1, vec2, out=None) → Tensor
    Computes the outer product of vec1 and vec2. If vec1 has size n and vec2 has size m, out is an n × m matrix.
    Parameters: vec1 (Tensor) – the first 1-D input vector; vec2 (Tensor) – the second 1-D input vector; out (Tensor, optional) – output tensor

    >>> v1 = torch.arange(1, 5)
    >>> v2 = torch.arange(1, 4)
    >>> torch.ger(v1, v2)
      1   2   3
      2   4   6
      3   6   9
      4   8  12
    [torch.FloatTensor of size 4x3]
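addmm fuses a matrix product and an addition into one call; the following minimal sketch (using only functions from this section) checks it against the unfused operations:

    import torch

    M = torch.randn(2, 3)
    mat1 = torch.randn(2, 4)
    mat2 = torch.randn(4, 3)

    fused = torch.addmm(M, mat1, mat2)    # default beta=1, alpha=1
    unfused = M + torch.mm(mat1, mat2)    # the same formula in two steps

    print(torch.dist(fused, unfused))   # ~0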
torch.gesv(B, A) -> (Tensor, Tensor)
    Solves the system of linear equations AX = B, where A is an m × m coefficient matrix and B an m × k matrix of right-hand sides; the returned solution X is m × k.
    Parameters: B (Tensor) – the m × k matrix of right-hand sides; A (Tensor) – the m × m coefficient matrix
    Returns: a tuple (X, LU), where LU holds the LU factorization of A

    >>> A = torch.Tensor([[6.80, -2.11, 5.66, 5.97, 8.23],
    ...                   [-6.05, -3.30, 5.36, -4.44, 1.08],
    ...                   [-0.45, 2.58, -2.70, 0.27, 9.04],
    ...                   [8.32, 2.71, 4.35, -7.17, 2.14],
    ...                   [-9.67, -5.14, -7.26, 6.08, -6.87]]).t()
    >>> B = torch.Tensor([[4.02, 6.19, -8.22, -7.57, -3.03],
    ...                   [-1.56, 4.00, -8.67, 1.75, 2.86],
    ...                   [9.81, -4.09, -4.57, -8.61, 8.99]]).t()
    >>> X, LU = torch.gesv(B, A)
    >>> torch.dist(B, torch.mm(A, X))   # close to zero

torch.inverse(input, out=None) → Tensor
    Computes the inverse of the square matrix input. Note: irrespective of the original strides, the returned matrix will be transposed, i.e. with strides (1, m) instead of (m, 1).
    Parameters: input (Tensor) – the input 2-D square tensor; out (Tensor, optional) – output tensor

    >>> x = torch.rand(10, 10)
    >>> y = torch.inverse(x)
    >>> z = torch.mm(x, y)                        # approximately the identity matrix
    >>> torch.max(torch.abs(z - torch.eye(10)))   # maximum deviation from the identity; close to zero

torch.mm(mat1, mat2, out=None) → Tensor
    Performs a matrix multiplication of the matrices mat1 and mat2. If mat1 is n × m and mat2 is m × p, out is n × p.
    Parameters: mat1 (Tensor) – the first matrix; mat2 (Tensor) – the second matrix; out (Tensor, optional)

    >>> mat1 = torch.randn(2, 3)
    >>> mat2 = torch.randn(3, 3)
    >>> torch.mm(mat1, mat2)   # size 2x3

torch.mv(mat, vec, out=None) → Tensor
    Performs a matrix-vector product of the matrix mat and the vector vec. If mat is n × m and vec is a 1-D tensor of size m, out is a 1-D tensor of size n.
    Parameters: mat (Tensor) – the matrix; vec (Tensor) – the vector; out (Tensor, optional)

    >>> mat = torch.randn(2, 3)
    >>> vec = torch.randn(3)
    >>> torch.mv(mat, vec)

torch.qr(input, out=None) -> (Tensor, Tensor)
    Computes the QR decomposition of the matrix input: returns matrices q and r such that input = q * r, where q is orthogonal and r is upper triangular. Note: irrespective of the original strides, the returned matrix q will be transposed, i.e. with strides (1, m) instead of (m, 1).
    Parameters: input (Tensor) – the input 2-D tensor; out (tuple, optional) – the output tuple of Q and R

    >>> a = torch.Tensor([[12, -51, 4], [6, 167, -68], [-4, 24, -41]])
    >>> q, r = torch.qr(a)
    >>> torch.mm(q, r).round()       # recovers a
      12  -51    4
       6  167  -68
      -4   24  -41
    >>> torch.mm(q.t(), q).round()   # q is orthogonal
     1 -0  0
    -0  1  0
     0  0  1

torch.svd(input, some=True, out=None) -> (Tensor, Tensor, Tensor)
    U, S, V = torch.svd(A) returns the singular value decomposition of a real n × m matrix A such that A = U S V'. U is n × n, S is n × m and V is m × m. some controls the number of singular vectors computed: if some=True, it computes some of them, and if some=False, it computes all. Note: irrespective of the original strides, the returned matrix U will be transposed, i.e. with strides (1, n) instead of (n, 1).
    Parameters: input (Tensor) – the input 2-D tensor; some (bool, optional) – controls the number of singular vectors to compute; out (tuple, optional) – the output tuple

    >>> a = torch.Tensor([[8.79, 6.11, -9.15, 9.57, -3.49, 9.84],
    ...                   [9.93, 6.91, -7.93, 1.64, 4.02, 0.15],
    ...                   [9.83, 5.04, 4.86, 8.83, 9.80, -8.99],
    ...                   [5.45, -0.27, 4.85, 0.74, 10.00, -6.02],
    ...                   [3.16, 7.98, 3.01, 5.80, 4.27, -5.31]]).t()
    >>> u, s, v = torch.svd(a)
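The factors returned by torch.svd can be multiplied back together to recover the input, which makes a convenient sanity check; a minimal reconstruction sketch (assuming some=True, so that for an n × m input with n >= m, u is n × m, s has length m and v is m × m):

    import torch

    a = torch.randn(5, 3)
    u, s, v = torch.svd(a)

    # rebuild a from the factors: A = U diag(S) V'
    reconstructed = torch.mm(torch.mm(u, torch.diag(s)), v.t())

    print(torch.dist(a, reconstructed))   # ~0 up to floating-point error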

