Auto Topic: convolution

auto_convolution | topic

Coverage Score
1
Mentioned Chunks
14
Mentioned Docs
1

Required Dimensions

definition, pros_cons

Covered Dimensions

definition, pros_cons

Keywords

convolution

Relations

Source | Type | Target | W
Auto Topic: convolution | CO_OCCURS | Auto Topic: kernel | 8
Auto Topic: convolution | CO_OCCURS | Auto Topic: dimension | 7
Auto Topic: convolution | CO_OCCURS | Auto Topic: pixels | 7
Auto Topic: convolution | CO_OCCURS | Auto Topic: convolutional | 6
Auto Topic: convolution | CO_OCCURS | Auto Topic: stride | 6
Auto Topic: cnn | CO_OCCURS | Auto Topic: convolution | 4
Auto Topic: activation | CO_OCCURS | Auto Topic: convolution | 3
Auto Topic: cnns | CO_OCCURS | Auto Topic: convolution | 3

Evidence Chunks

Source | Confidence | Mentions | Snippet
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.61 | 4 | ... and the process of applying the kernel to the pixels of the image (or to spatially organized units in a subsequent layer) is called convolution. Kernels and convolutions are easiest to illustrate in one dimension rather than two or more, so we will assume an ...
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.61 | 4 | ... hibit temporal invariance. 4 In the terminology of signal processing, we would call this operation a cross-correlation, not a convolution. But “convolution” is used within the field of neural networks. [Figure 22.4: example pixel values and ±1 kernel weights] ...
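The terminological point in this snippet can be made concrete: what neural networks call “convolution” is, in signal-processing terms, cross-correlation; true convolution flips the kernel before sliding it. A minimal pure-Python sketch (function names are ours, not the book's):

```python
def cross_correlate(x, k):
    """Slide the kernel over x without flipping it -- what CNNs
    actually compute, although the field calls it "convolution"."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(len(x) - n + 1)]

def true_convolution(x, k):
    """Signal-processing convolution: flip the kernel, then slide."""
    return cross_correlate(x, k[::-1])

# With an asymmetric kernel the two operations differ:
print(cross_correlate([1, 2, 3, 4], [1, 0, -1]))   # [-2, -2]
print(true_convolution([1, 2, 3, 4], [1, 0, -1]))  # [2, 2]
```

With a symmetric kernel such as (+1, −1, +1) from the figure, the two coincide, which is one reason the distinction is often glossed over.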
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.59 | 3 | ... olution of two functions f and g (denoted as h = f ∗ g) if we have h(x) = ∑_{u=−∞}^{+∞} f(u) g(x − u) in one dimension, or h(x, y) = ∑_{u=−∞}^{+∞} ∑_{v=−∞}^{+∞} f(u, v) g(x − u, y − v) in two dimensions. So the smoothing function is achieved by convolv ...
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.57 | 2 | ... x0, y0) to (x, y). This kind of weighted sum is so common that there is a special name and notation for it. We say that the function h is the convolution of two functions f and g (denoted as h = f ∗ g) if we have h(x) = ∑_{u=−∞}^{+∞} f(u) g(x − u) in one dimension, or h(x, y ...
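The 1-D definition quoted in these chunks, h(x) = ∑_u f(u) g(x − u), translates directly to code once f and g have finite support. A sketch of the full discrete convolution (our own helper, not from the book):

```python
def convolve_full(f, g):
    """Discrete convolution h = f * g by the textbook definition:
    h[x] = sum over u of f[u] * g[x - u], for finite sequences
    indexed from 0; the result has length len(f) + len(g) - 1."""
    h = [0] * (len(f) + len(g) - 1)
    for x in range(len(h)):
        for u in range(len(f)):
            if 0 <= x - u < len(g):
                h[x] += f[u] * g[x - u]
    return h

print(convolve_full([1, 2, 3], [0, 1]))  # [0, 1, 2, 3]
```

Because the sum ranges symmetrically over u, the operation is commutative: f ∗ g = g ∗ f, which the sliding-window (cross-correlation) form does not satisfy.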
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.57 | 2 | ... do well. You should think of a layer—a convolution followed by a ReLU activation function—as a local pattern detector (Figure 27.12). The convolution measures how much each local window of the image looks like the kernel pattern; the ReLU sets low-scoring windows to zero, and e ...
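The “local pattern detector” reading of convolution-plus-ReLU in this snippet can be sketched in a few lines (1-D for readability; the function name and data are illustrative, not from the book):

```python
def pattern_detector(signal, kernel):
    """Convolution (sliding dot product) followed by ReLU: each
    output scores how much a local window resembles the kernel,
    and ReLU zeroes out the low-scoring (negative) windows."""
    n = len(kernel)
    scores = [sum(signal[i + j] * kernel[j] for j in range(n))
              for i in range(len(signal) - n + 1)]
    return [max(0, s) for s in scores]  # ReLU

# Windows matching the alternating pattern score high; the
# anti-aligned middle window is suppressed to zero.
print(pattern_detector([1, -1, 1, -1, 1], [1, -1, 1]))  # [3, 0, 3]
```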
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.57 | 2 | [book index entries] ... 964 convention, 595 conversion to normal form, 317–318 convexity, 140 convex optimization, 140, 159 CONVINCE (Bayesian expert system), 472 convolution, 997 convolution (in neural networks), 811 convolutional neural network (CNN), 811, 1003 Conway, D., 738, 1091 Cook, P. J., 1051 ...
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... the strides in the x and y directions in the image.) We say “roughly” because of what happens at the edge of the image: in Figure 22.4 the convolution stops at the edges of the image, but one can also pad the input with extra pixels (either zeroes or copies of the outer pixels) ...
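The edge behavior discussed here determines the output length. A sketch using the standard output-size rule, ⌊(n + 2p − l)/s⌋ + 1 for input length n, kernel size l, stride s, and p padding pixels per side (the helper names are ours):

```python
def output_size(n, l, s, p=0):
    """Number of kernel positions when the convolution stops at the
    edges (p=0) or when p extra pixels are added on each side."""
    return (n + 2 * p - l) // s + 1

def pad(x, p, mode="zeros"):
    """Pad with zeroes or with copies of the outer pixels,
    the two options the text mentions."""
    left = [0] * p if mode == "zeros" else [x[0]] * p
    right = [0] * p if mode == "zeros" else [x[-1]] * p
    return left + x + right

print(output_size(9, 3, 2))       # 4 positions with no padding
print(output_size(9, 3, 2, p=1))  # 5 positions with one pixel per side
print(pad([1, 2, 3], 1, "edge"))  # [1, 1, 2, 3, 3]
```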
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... it can also be formulated as a single matrix operation, just like the application of the weight matrix in Equation (22.1). For example, the convolution illustrated in Figure 22.4 can be viewed as a matrix multiplication in which each row holds a shifted copy of the kernel (+1 −1 +1) ...
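The claim that a convolution is a single matrix operation can be checked directly: build a matrix whose rows are stride-shifted copies of the kernel and compare against the sliding-window computation. A sketch, using the snippet's (+1, −1, +1) kernel with a stride and input length we chose for illustration (the exact dimensions of Figure 22.4 are not recoverable from the extract):

```python
def conv_matrix(kernel, n, stride=1):
    """Sparse matrix whose rows are copies of the kernel, each
    shifted right by the stride; multiplying a length-n input by
    it performs the strided convolution."""
    l = len(kernel)
    return [[0] * i + list(kernel) + [0] * (n - i - l)
            for i in range(0, n - l + 1, stride)]

def matvec(m, x):
    return [sum(w * v for w, v in zip(row, x)) for row in m]

def strided_conv(x, kernel, stride):
    """Reference sliding-window computation."""
    l = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(l))
            for i in range(0, len(x) - l + 1, stride)]

x = [5, 6, 6, 2, 5, 6, 5, 5, 9]
k = [1, -1, 1]
print(matvec(conv_matrix(k, len(x), 2), x))  # [5, 9, 4, 9]
print(strided_conv(x, k, 2))                 # [5, 9, 4, 9]
```

The matrix is mostly zeroes, as the following chunk notes, but its linearity is what makes gradient descent straightforward for CNNs.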
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... he second hidden layer. Generally speaking, the deeper the unit, the larger the receptive field. ... mostly zeroes, after all—but the fact that convolution is a linear matrix operation serves as a reminder that gradient descent can be applied easily and effectively to CNNs, just as it ...
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... ayer in a neural network summarizes a set of adjacent units from the preceding layer with a single value. Pooling works just like a convolution layer, with a kernel size l and stride s, but the operation that is applied is fixed rather than learned. Typically, no activation ...
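The parallel this chunk draws between pooling and convolution (same kernel size l and stride s, but a fixed rather than learned window operation) fits in a few lines; a 1-D sketch with names of our choosing:

```python
def pool(x, l, s, op=max):
    """Pooling layer: slide a window of size l with stride s, but
    apply a fixed operation (max, average, ...) instead of a
    learned dot product, with no activation function after."""
    return [op(x[i:i + l]) for i in range(0, len(x) - l + 1, s)]

def average(window):
    return sum(window) / len(window)

print(pool([1, 3, 2, 5, 4, 4], 2, 2))           # max pooling: [3, 5, 4]
print(pool([1, 3, 2, 5, 4, 4], 2, 2, average))  # average pooling: [2.0, 3.5, 4.0]
```

With s > 1 the output is shorter than the input, which is how pooling contributes to the layer-size reductions described in the next chunk.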
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... ith c output units. The early layers of the CNN are image-sized, so somewhere in between there must be significant reductions in layer size. Convolution layers and pooling layers with stride larger than 1 all serve to reduce the layer size. It’s also possible to reduce the layer s ...
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... keeping track of the “shape” of the data as it progresses through the layers of the network. This is important because the whole notion of convolution depends on the idea of adjacency: adjacent data elements are assumed to be semantically related, so it makes sense to apply oper ...
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... lex cells” that are invariant to some transformations such as small spatial translations. In modern convolutional networks, the output of a convolution is analogous to a simple cell while the output of a pooling layer is analogous to a complex cell. The work of Hubel and Wiesel i ...
textbook, Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... ond layer receives inputs from first-layer values in a window about that location. This ... [Figure 27.12 panels: Digits, Kernels, Convolution output, Test against threshold] Figure 27.12 On the far left, some images from the MNIST data set. Three kernels appear on the cent ...