recognition


Is there a Chinese translation of "Eigenfaces for Recognition"?

function pca (path, trainList, subDim)
%
% PROTOTYPE
% function pca (path, trainList, subDim)
%
% USAGE EXAMPLE(S)
% pca ('C:/FERET_Normalised/', trainList500Imgs, 200);
%
% GENERAL DESCRIPTION
% Implements the standard Turk-Pentland Eigenfaces method. As a final
% result, this function saves the pcaProj matrix to disk with all images
% projected onto the subDim-dimensional subspace found by PCA.
%
% REFERENCES
% M. Turk, A. Pentland, Eigenfaces for Recognition, Journal of Cognitive
% Neuroscience, Vol. 3, No. 1, 1991, pp. 71-86
%
% M.A. Turk, A.P. Pentland, Face Recognition Using Eigenfaces, Proceedings
% of the IEEE Conference on Computer Vision and Pattern Recognition,
% 3-6 June 1991, Maui, Hawaii, USA, pp. 586-591
%
% INPUTS:
% path      - full path to the normalised images from the FERET database
% trainList - list of images to be used for training; names should be
%             given without extension and .pgm will be added automatically
% subDim    - number of dimensions to be retained (the desired subspace
%             dimensionality); if this argument is omitted, the maximum
%             number of non-zero dimensions is retained, i.e.
%             (number of training images) - 1
%
% OUTPUTS:
% The function generates and saves to disk the following outputs:
% DATA          - matrix where each column is one image reshaped into a vector;
%                 its size is (number of pixels) x (number of images), uint8
% imSpace       - same as DATA but only for images in the training set
% psi           - mean face (of the training images)
% zeroMeanSpace - mean face subtracted from each column of imSpace
% pcaEigVals    - eigenvalues
% w             - lower dimensional PCA subspace
% pcaProj       - all images projected onto the subDim-dimensional space
%
% NOTES / COMMENTS
% * The following file must either be in the same path as this function
%   or somewhere in Matlab's path:
%   1. listAll.mat - containing the list of all 3816 FERET images
%
% ** Each dimension of the resulting subspace is normalised to unit length
%
% *** Developed using Matlab 7
%
% REVISION HISTORY
% -
%
% RELATED FUNCTIONS (SEE ALSO)
% createDistMat, feret
%
% ABOUT
% Created: 03 Sep 2005
% Last Update: -
% Revision: 1.0
%
% AUTHOR: Kresimir Delac
% mailto: kdelac@ieee.org
% URL:
%
% WHEN PUBLISHING A PAPER AS A RESULT OF RESEARCH CONDUCTED BY USING THIS CODE
% OR ANY PART OF IT, MAKE A REFERENCE TO THE FOLLOWING PAPER:
% Delac K., Grgic M., Grgic S., Independent Comparative Study of PCA, ICA, and LDA
% on the FERET Data Set, International Journal of Imaging Systems and Technology,
% Vol. 15, Issue 5, 2006, pp. 252-260

% If subDim is not given, n - 1 dimensions are
% retained, where n is the number of training images
if nargin < 3
    subDim = length(trainList) - 1;   % use trainList directly; dim is only computed further down
end;

disp(' ')

load listAll;

% Constants
numIm = 3816;

% Memory allocation for DATA matrix
fprintf('Creating DATA matrix\n')
tmp = imread ( [path char(listAll(1)) '.pgm'] );
[m, n] = size (tmp);                % image size - used later also!!!

DATA = uint8 (zeros(m*n, numIm));   % Memory allocated
clear str tmp;

% Creating DATA matrix
for i = 1 : numIm
    im = imread ( [path char(listAll(i)) '.pgm'] );
    DATA(:, i) = reshape (im, m*n, 1);
end;
save DATA DATA;
clear im;

% Creating training images space
fprintf('Creating training images space\n')
dim = length (trainList);
imSpace = zeros (m*n, dim);
for i = 1 : dim
    index = strmatch (trainList(i), listAll);
    imSpace(:, i) = DATA(:, index);
end;
save imSpace imSpace;
clear DATA;

% Calculating mean face from training images
fprintf('Zero mean\n')
psi = mean(double(imSpace'))';
save psi psi;

% Zero mean
zeroMeanSpace = zeros(size(imSpace));
for i = 1 : dim
    zeroMeanSpace(:, i) = double(imSpace(:, i)) - psi;
end;
save zeroMeanSpace zeroMeanSpace;
clear imSpace;

% PCA
fprintf('PCA\n')
L = zeroMeanSpace' * zeroMeanSpace;         % Turk-Pentland trick (part 1)
[eigVecs, eigVals] = eig(L);

diagonal = diag(eigVals);
[diagonal, index] = sort(diagonal);
index = flipud(index);

pcaEigVals = zeros(size(eigVals));
for i = 1 : size(eigVals, 1)
    pcaEigVals(i, i) = eigVals(index(i), index(i));
    pcaEigVecs(:, i) = eigVecs(:, index(i));
end;

pcaEigVals = diag(pcaEigVals);
pcaEigVals = pcaEigVals / (dim-1);
pcaEigVals = pcaEigVals(1 : subDim);        % Retaining only the largest subDim ones

pcaEigVecs = zeroMeanSpace * pcaEigVecs;    % Turk-Pentland trick (part 2)

save pcaEigVals pcaEigVals;

% Normalisation to unit length
fprintf('Normalising\n')
for i = 1 : dim
    pcaEigVecs(:, i) = pcaEigVecs(:, i) / norm(pcaEigVecs(:, i));
end;

% Dimensionality reduction
fprintf('Creating lower dimensional subspace\n')
w = pcaEigVecs(:, 1:subDim);
save w w;
clear w;

% Subtract mean face from all images
load DATA;
load psi;
zeroMeanDATA = zeros(size(DATA));
for i = 1 : size(DATA, 2)
    zeroMeanDATA(:, i) = double(DATA(:, i)) - psi;
end;
clear psi;
clear DATA;

% Project all images onto the new lower dimensional subspace (w)
fprintf('Projecting all images onto a new lower dimensional subspace\n')
load w;
pcaProj = w' * zeroMeanDATA;
clear w;
clear zeroMeanDATA;
save pcaProj pcaProj;
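The heart of the function above is the Turk-Pentland trick: rather than eigendecomposing the huge (pixels x pixels) covariance matrix, it decomposes the small (images x images) matrix zeroMeanSpace' * zeroMeanSpace and then maps the resulting eigenvectors back to pixel space by multiplying with zeroMeanSpace. Below is a minimal, self-contained MATLAB sketch of just that idea, using hypothetical sizes and random data in place of the FERET files assumed above:

% Minimal sketch of the Turk-Pentland trick on a random data matrix.
% X is (numPixels x numImages); in practice numPixels >> numImages.
numPixels = 10000;                         % hypothetical sizes, for illustration only
numImages = 50;
subDim    = 20;

X   = rand(numPixels, numImages);          % stand-in for the stacked image vectors
psi = mean(X, 2);                          % mean "face"
X0  = X - repmat(psi, 1, numImages);       % zero-mean columns

L = X0' * X0;                              % small (numImages x numImages) matrix
[V, D] = eig(L);                           % cheap eigendecomposition
[vals, order] = sort(diag(D), 'descend');  % sort eigenvalues, largest first

U = X0 * V(:, order(1:subDim));            % map back to pixel space (trick, part 2)
for i = 1:subDim
    U(:, i) = U(:, i) / norm(U(:, i));     % normalise each eigenface to unit length
end

proj = U' * X0;                            % (subDim x numImages) projection, like pcaProj

The columns of U here play the role of the eigenfaces (the columns of w in the function above), and proj corresponds to pcaProj.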

Is there a Chinese edition of the classic PRML (Pattern Recognition and Machine Learning)?

No, not even a photo-offset reprint. If you want a paper copy, you can only buy the imported foreign edition, or read an electronic version.

What does "sense of recognition" mean?

It means a feeling of recognition, i.e. the sense of recognising something or someone; a sense of familiarity.

What does "mutual recognition" mean?

Mutual acknowledgement; recognising each other.

Can anyone help with "error recognition"?

Look up "english interpreting" and you will find it.

What does "instant recognition" mean?

Instant recognition / instant identification / immediate identification.

What does "mutual recognition" mean?

Awareness or recognition of each other.

"need recognition" or "need recognizing"?

You can fill in "recognition", and a noun actually fits better here: ...needs recognition for all her hard work. "needs" is the third-person singular form of "need".

to get recognition

Grammar, sentence analysis: 1. It (formal subject) is (linking verb) a good opportunity (predicative); "to get recognition for my invention" is the real subject of the sentence. 2. I (subject) want (verb) to take the opportunity (object); "to get recognition for my invention" is entirely an attributive modifying "opportunity".

Usage of "recognition"

"In recognition" is used with the preposition "of"; every dictionary example of "recognition" uses "in recognition of ...". That said, "in recognition to ..." does turn up a few million search results on Google, while "in recognition of" gets hundreds of millions. "In recognition to" is probably just a common grammatical mistake.

What is the difference between "recognition" and "recognization"?

"recognization" means identification.

The difference between "recognition" and "recognization"

Use "recognition" for "recognising, acknowledging, confirming"; use "recognization" for "identification". The latter is more formal and is generally only used in writing.

The difference between "recognition" and "cognition"

"recognition" carries the sense of subjective discrimination, identifying or recognising something; "cognition" mainly means cognition or awareness. "recognition" can be replaced by its synonym "identification", but "cognition" cannot.

What does "employee recognition" mean?

employee recognition: recognition (identification/acknowledgement) of employees.

What does "revenue recognition" mean?

revenue recognition: the recognition of revenue; recognition of operating income. [Example] Revenue recognition issue relating to a funeral parlour.

Pros and cons of iris recognition

Iris recognition is the iris-based biometric technology. Advantages: 1. Like all biometrics, it relies on a functional organ of the body itself, so it cannot be "forgotten" the way a password can; 2. Like face recognition, it is contactless: the user does not need to touch the device for the image to be captured, which is hygienic and avoids possible contact transmission of disease; 3. Unlike fingerprints and faces, which can be altered or worn down, the iris sits inside the eye and essentially cannot be copied or modified. Disadvantages: 1. Miniaturising the hardware is difficult, and smartphones are already very small devices; 2. Compared with other biometric hardware, iris-recognition hardware is relatively expensive, which makes large-scale roll-out difficult; 3. It is less convenient to use, recognition accuracy is slightly lower, and response is slower.

What does "handwriting recognition" mean?

handwriting recognition: recognition of handwritten text.

Is the "g" in "recognition" silent?

No, it is not silent. The "g" in "gn" is silent only at the beginning or end of a word. In the middle of a word, the "g" is an incompletely released stop: the closure is formed but not fully released. Examples: "gnome" (an earth spirit), "gnaw" (to bite or chew), "sign" (a mark), "gnu" (the antelope), "gnash" (to grind the teeth), and "design". I hope this helps.

"acknowledge" and "recognition"

"acknowledge" is a verb, usually meaning "to admit"; "recognition" is a noun meaning "identification, approval". The verb form of "recognition" is "recognize", a near-synonym of "acknowledge", but "recognize" is usually used for recognising a person or realising something. Examples: I recognized him as soon as he came in the room. They recognized the need to take the problem seriously.

Adjective forms of "recognition"

Hello. Besides the two adjectives "recognized" and "recognizing" given by the other two answers, the verb "recognize" has three more adjective forms: 1) recognized (adjective: generally acknowledged) [already given by another answer] / recognised (British spelling); 2) recognizing (adjective: recognising, identifying) [already given by another answer] / recognising (British spelling); 3) recognizable (adjective: able to be recognised or identified) / recognisable (British spelling); 4) recognizant (adjective: aware of, conscious of; acknowledging) / recognisant (British spelling); 5) recognitory (adjective: acknowledging, recognising; now rarely used). Verb: recognize (American spelling) / recognise (British spelling).

Is "recognition" countable?

"recognition" is countable only in the sense of an act of recognising; in all other senses it is uncountable. Plural "recognitions": His quick recognitions made him frantically impatient of deliberate judgement. Uncountable senses: N-UNCOUNT recognising, identifying someone or something; N-UNCOUNT acknowledgement, acceptance, understanding; N-UNCOUNT diplomatic recognition (of another country's government); N-UNCOUNT appreciation, praise, approval.

Are "perception" and "recognition" synonyms?

They are not synonyms. perception n. perception; notion; awareness, (power of) observation; the gathering (of crops). [Example] He is interested in how our perceptions of death affect the way we live. recognition n. recognising, identification; acknowledgement, approval; praise; reward.

Is the noun form of "recognize" "recognization" or "recognition"? What is the difference between the two, and why do different teachers say different things?

It is simply "recognition".

The difference between "recognition" and "memory"

The difference between "recognition" and "memory": "recognition" refers to the ability to recognise something from memory, while "memory" refers to what is remembered and recalled.

What is the difference between "recognition" and "acceptance"?

1. recognition, UK [ˌrekəɡˈnɪʃn], US [ˌrɛkəɡˈnɪʃən]; n. recognising, identification; acknowledgement, approval; praise; reward. Example: His government did not receive full recognition by Britain until July. 2. recognization, UK/US [ˌrekəɡnaɪˈzeɪʃn]; (medicine) identification; the noun form of "recognize". Example: The judgement criterion of the capability is the ability for recognization.

What does "recognition" mean?

recognition, UK [ˌrekəɡˈnɪʃn], US [ˌrɛkəɡˈnɪʃən], n. recognising, identification; acknowledgement, approval; praise; reward. Plural: recognitions. Example: 1. Facial recognition programs are used in police and security operations.

What does "recognition" mean?

New generation.

What does "recognition" mean?

Word lists: CET-4, CET-6, TOEFL, postgraduate entrance exam.
recognition, UK [ˌrekəɡˈnɪʃn], US [ˌrɛkəɡˈnɪʃən], n. recognising, identification; acknowledgement, approval; praise; reward. Plural: recognitions. Derived word: recognitory.
Example sentences (Collins): 1. But by the year 2020 business computing will have changed beyond recognition. 2. His government did not receive full recognition by Britain until July. 3. The situation in Eastern Europe has changed out of all recognition. 4. South Africa gave diplomatic recognition to Rwanda's new government on September 15. 5. This lack of recognition was at the root of the dispute.
Collins COBUILD senses:
1. N-UNCOUNT Recognition is the act of recognizing someone or identifying something when you see it. George said, "Ida, how are you?" She frowned for a moment and then recognition dawned. "George Black. Well, I never." / He searched for a sign of recognition on her face, but there was none.
2. N-UNCOUNT Recognition of something is an understanding and acceptance of it. The CBI welcomed the Chancellor's recognition of the recession and hoped for a reduction in interest rates.
3. N-UNCOUNT When a government gives diplomatic recognition to another country, they officially accept that its status is valid. South Africa gave diplomatic recognition to Rwanda's new government on September 15. / His government did not receive full recognition by Britain until July.
4. N-UNCOUNT When a person receives recognition for the things that they have done, people acknowledge the value or skill of their work. At last, her father's work has received popular recognition. / He is an outstanding goalscorer who doesn't get the recognition he deserves.
5. PHRASE If you say that someone or something has changed beyond recognition or out of all recognition, you mean that person or thing has changed so much that you can no longer recognize them. The bodies were mutilated beyond recognition. / The facilities have improved beyond all recognition. / The situation in Eastern Europe has changed out of all recognition.
6. PREP-PHRASE If something is done in recognition of someone's achievements, it is done as a way of showing official appreciation of them. Brazil normalised its diplomatic relations with South Africa in recognition of the steps taken to end apartheid. / He had just received a doctorate in recognition of his contributions to seismology.

The difference between "recognition" and "recognization"

1. They express different things. recognition: acknowledgement, acceptance; commendation, praise; recognising, identification; (a government's) diplomatic recognition of another country. recognization: approval; identification. 2. They are used differently. recognition is used in the sense of "recognising, acknowledging": He glanced briefly towards her but there was no sign of recognition. recognization is used in writing in the sense of "identification": This method has some value in terms of other pattern recognization and decision-making problems. 3. They differ in emphasis. recognition carries a sense of subjective discrimination and identification; recognization has no particular subjective or objective emphasis.

What does "linked recognition" mean?

linked recognition, UK [lɪŋkt ˌrekəɡˈnɪʃn], US [lɪŋkt ˌrɛkəɡˈnɪʃən]; (medicine/immunology) linked recognition.

Very Deep Convolutional Networks for Large-Scale Image Recognition, translation (part 1)

See also: "Very Deep Convolutional Networks for Large-Scale Image Recognition", translation (part 2) and code.

Very Deep Convolutional Networks for Large-Scale Image Recognition
Paper: http://arxiv.org/pdf/1409.1556v6.pdf

ABSTRACT
[...] using an architecture with very small (3×3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

1 INTRODUCTION
Convolutional networks (ConvNets) have recently enjoyed a great success in large-scale image and video recognition (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014; Simonyan & Zisserman, 2014), which has become possible due to the large public image repositories, such as ImageNet (Deng et al., 2009), and high-performance computing systems, such as GPUs or large-scale distributed clusters (Dean et al., 2012). In particular, an important role in the advance of deep visual recognition architectures has been played by the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2014), which has served as a testbed for a few generations of large-scale image classification systems, from high-dimensional shallow feature encodings (Perronnin et al., 2010) (the winner of ILSVRC-2011) to deep ConvNets (Krizhevsky et al., 2012) (the winner of ILSVRC-2012).

[...] which is feasible due to the use of very small (3×3) convolution filters in all layers.

As a result, we come up with significantly more accurate ConvNet architectures, which not only achieve the state-of-the-art accuracy on ILSVRC classification and localisation tasks, but are also applicable to other image recognition datasets, where they achieve excellent performance even when used as a part of relatively simple pipelines (e.g. deep features classified by a linear SVM without fine-tuning). We have released our two best-performing models (http://www.robots.ox.ac.uk/~vgg/research/very_deep/) to facilitate further research.

The rest of the paper is organised as follows. In Sect. 2, we describe our ConvNet configurations. The details of the image classification training and evaluation are then presented in Sect. 3, and the configurations are compared on the ILSVRC classification task in Sect. 4. Sect. 5 concludes the paper. For completeness, we also describe and assess our ILSVRC-2014 object localisation system in Appendix A, and discuss the generalisation of very deep features to other datasets in Appendix B. Finally, Appendix C contains the list of major paper revisions. (*current affiliation: Google DeepMind; +current affiliation: University of Oxford and Google DeepMind)

2 CONVNET CONFIGURATIONS
To measure the improvement brought by the increased ConvNet depth in a fair setting, all our ConvNet layer configurations are designed using the same principles, inspired by Ciresan et al. (2011); Krizhevsky et al. (2012). In this section, we first describe a generic layout of our ConvNet configurations (Sect. 2.1) and then detail the specific configurations used in the evaluation (Sect. 2.2). Our design choices are then discussed and compared to the prior art in Sect. 2.3.

2.1 ARCHITECTURE
A stack of convolutional layers (which has a different depth in different architectures) is followed by three Fully-Connected (FC) layers: the first two have 4096 channels each, the third performs 1000-way ILSVRC classification and thus contains 1000 channels (one for each class). The final layer is the soft-max layer. The configuration of the fully connected layers is the same in all networks.

All hidden layers are equipped with the rectification (ReLU (Krizhevsky et al., 2012)) non-linearity. We note that none of our networks (except for one) contain Local Response Normalisation (LRN) normalisation (Krizhevsky et al., 2012): as will be shown in Sect. 4, such normalisation does not improve the performance on the ILSVRC dataset, but leads to increased memory consumption and computation time. Where applicable, the parameters for the LRN layer are those of (Krizhevsky et al., 2012).

2.2 CONFIGURATIONS
The ConvNet configurations, evaluated in this paper, are outlined in Table 1, one per column. In the following we will refer to the nets by their names (A-E). All configurations follow the generic design presented in Sect. 2.1, and differ only in the depth: from 11 weight layers in the network A (8 conv. and 3 FC layers) to 19 weight layers in the network E (16 conv. and 3 FC layers). The width of conv. layers (the number of channels) is rather small, starting from 64 in the first layer and then increasing by a factor of 2 after each max-pooling layer, until it reaches 512.

In Table 2 we report the number of parameters for each configuration. In spite of a large depth, the number of weights in our nets is not greater than the number of weights in a more shallow net with larger conv. layer widths and receptive fields (144M weights in (Sermanet et al., 2014)).

2.3 DISCUSSION
[...]

3 CLASSIFICATION FRAMEWORK
In the previous section we presented the details of our network configurations. In this section, we describe the details of classification ConvNet training and evaluation.

3.1 TRAINING
[...]
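To make the layer pattern of Sect. 2.1-2.2 concrete, here is a rough sketch of configuration D (16 weight layers: 13 conv. and 3 FC), written as a MATLAB layer array. It assumes the Deep Learning Toolbox (imageInputLayer, convolution2dLayer, etc.) and the paper's 224×224 RGB input; it only illustrates the channel progression 64, 128, 256, 512 described above and is not the authors' released model (their training-time dropout on the first two FC layers is omitted).

% Rough sketch of VGG configuration D, assuming the Deep Learning Toolbox.
layers = [
    imageInputLayer([224 224 3])                   % fixed-size RGB input assumed from the paper

    convolution2dLayer(3, 64, 'Padding', 'same')   % block 1: 2 x conv3-64
    reluLayer
    convolution2dLayer(3, 64, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)

    convolution2dLayer(3, 128, 'Padding', 'same')  % block 2: 2 x conv3-128
    reluLayer
    convolution2dLayer(3, 128, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)

    convolution2dLayer(3, 256, 'Padding', 'same')  % block 3: 3 x conv3-256
    reluLayer
    convolution2dLayer(3, 256, 'Padding', 'same')
    reluLayer
    convolution2dLayer(3, 256, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)

    convolution2dLayer(3, 512, 'Padding', 'same')  % block 4: 3 x conv3-512
    reluLayer
    convolution2dLayer(3, 512, 'Padding', 'same')
    reluLayer
    convolution2dLayer(3, 512, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)

    convolution2dLayer(3, 512, 'Padding', 'same')  % block 5: 3 x conv3-512
    reluLayer
    convolution2dLayer(3, 512, 'Padding', 'same')
    reluLayer
    convolution2dLayer(3, 512, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)

    fullyConnectedLayer(4096)                      % FC-4096
    reluLayer
    fullyConnectedLayer(4096)                      % FC-4096
    reluLayer
    fullyConnectedLayer(1000)                      % 1000-way ILSVRC classification
    softmaxLayer
    classificationLayer];

Configurations A, B, C and E described in the excerpt differ from this sketch only in how many 3×3 conv. layers appear in each block (11 to 19 weight layers in total).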