Auto Topic: dimension
auto_dimension | topic
Coverage Score: 1
Mentioned Chunks: 30
Mentioned Docs: 1
Required Dimensions: definition, pros_cons
Covered Dimensions: definition, pros_cons
Keywords: dimension
Relations
| Source | Type | Target | Weight |
|---|---|---|---|
| Auto Topic: convolution | CO_OCCURS | Auto Topic: dimension | 7 |
| Auto Topic: dimension | CO_OCCURS | Auto Topic: kernel | 6 |
| Auto Topic: dimension | CO_OCCURS | Auto Topic: pixels | 6 |
| Auto Topic: dimension | CO_OCCURS | Auto Topic: stride | 5 |
| Auto Topic: cnn | CO_OCCURS | Auto Topic: dimension | 5 |
| Auto Topic: dimension | CO_OCCURS | Propositional Logic | 4 |
| Auto Topic: cnns | CO_OCCURS | Auto Topic: dimension | 4 |
| Auto Topic: convolutional | CO_OCCURS | Auto Topic: dimension | 3 |
Evidence Chunks
| Source | Confidence | Mentions | Snippet |
|---|---|---|---|
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.67 | 7 | ... ion of a K-d tree, a balanced binary tree. We start with a set of examples and at the root node we split them along the ith dimension by testing whether xi ≤ m, where m is the median of the examples along the ith dimension; thus half the examples ... (see the K-d tree sketch after this table) |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.63 | 5 | ... attan distance is used if they are dissimilar, such as age, weight, and gender of a patient. Note that if we use the raw numbers from each dimension then the total distance will be affected by a change in units in any dimension. That is, if we change the height dimension from met ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.59 | 3 | ... lysis has existed in statistics, beginning with the work on uniform convergence theory (Vapnik and Chervonenkis, 1971). The so-called VC dimension provides a measure roughly analogous to, but more general than, the ln \|H\| measure obtained from PAC analysis. The VC ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.59 | 3 | ... dimensional array of hidden units, where the third dimension is of size d. It is important to organize the hidden layer this way, so that all the kernel outputs from a particular image location stay associated with that location. Unlike the spatial dimensions of the image, howe ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.57 | 2 | ... ation to the measurements in each dimension. We can compute the mean $\mu_i$ and standard deviation $\sigma_i$ of the values in each dimension, and rescale them so that $x_{j,i}$ becomes $(x_{j,i} - \mu_i)/\sigma_i$. A more complex metric known as the Mahalanobis distance takes ... (see the normalization sketch after this table) |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.57 | 2 | ... hly analogous to, but more general than, the ln \|H\| measure obtained from PAC analysis. The VC dimension can be applied to continuous function classes, to which standard PAC analysis does not apply. PAC-learning theory and VC theory were first connected by the “four G ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.57 | 2 | ... s is a Gaussian filter. Recall that the zero-mean Gaussian function with standard deviation $\sigma$ is $G_\sigma(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^2/2\sigma^2}$ in one dimension, or $G_\sigma(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2+y^2)/2\sigma^2}$ in two dimensions. Applying a Gaussian filter means replacing the intensity $I(x_0, y_0)$ with the su ... (see the Gaussian filter sketch after this table) |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... s essentially two-dimensional. When the robot has arms and legs that must also be controlled, the search space becomes many-dimensional: one dimension for each joint angle. Advanced techniques are required just to make the essentially continuous search space finite (see Chapter 2 ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... sion we didn’t have to worry about overfitting. But with multivariable linear regression in high-dimensional spaces it is possible that some dimension that is actually irrelevant appears by chance to be useful, resulting in overfitting. Thus, it is common to use regularization on mu ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... way closest to the minimum, just because the corners are pointy. And of course the corners are the points that have a value of zero in some dimension. In Figure 19.14(b), we’ve done the same for the L2 complexity measure, which represents a circle rather than a ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... And of course the corners are the points that have a value of zero in some dimension. In Figure 19.14(b), we’ve done the same for the L2 complexity measure, which represents a circle rather than a diamond. Here you can see that, in general, there is no reason for the intersec ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... se are outliers; in general it will be hard to find a good value for them because we will be extrapolating rather than interpolating. In one dimension, these outliers are only 2% of the points on the unit line (those points where x < .01 or x > .99), but in 200 dimensions, over 98% o ... (see the worked computation after this table) |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... lar to the construction of a K-d tree, a balanced binary tree. We start with a set of examples and at the root node we split them along the ith dimension by testing whether xi ≤ m, where m is the median of the examples along ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... data are linearly separable in this space! This phenomenon is actually fairly general: if data are mapped into a space of sufficiently high dimension, then they will almost always be linearly separable; if you look at a set of points from enough directions, you’ll find a way to mak ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... even for innocuous-looking kernels. For example, the polynomial kernel, $K(x_j, x_k) = (1 + x_j \cdot x_k)^d$, corresponds to a feature space whose dimension is exponential in d. A common kernel is the Gaussian: $K(x_j, x_k) = e^{-\gamma \lvert x_j - x_k \rvert^2}$. [Section 19.7.6, The kernel trick] This the ... (see the kernel sketch after this table) |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... been projected down to two dimensions. But many data sets have dozens or even millions of dimensions. In order to visualize them we can do dimensionality reduction, projecting the data down to a map in two dimensions (or sometimes to three dimensions, which can then be explore ... |
| textbook Artificial-Intelligence-A-Modern-Approach-4th-Edition.pdf | 0.55 | 1 | ... organized units in a subsequent layer) is called convolution. Kernels and convolutions are easiest to illustrate in one dimension rather than two or more, so we will assume an input vector x of size n, corresponding to n pixels in a one- ... (see the convolution sketch after this table) |
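
Code Sketches

The K-d tree snippets above split examples along the ith dimension at the median m. Below is a minimal Python sketch of that construction; it assumes the examples arrive as an (n, k) NumPy array and cycles through dimensions one per level, a policy the snippets do not specify.

```python
import numpy as np

def build_kd_tree(examples: np.ndarray, depth: int = 0):
    """Recursively split examples at the median of one dimension per level.
    Cycling dimensions via depth % k is an assumed policy, not from the text."""
    n, k = examples.shape
    if n <= 1:
        return {"leaf": examples}
    i = depth % k                            # dimension to split on
    m = float(np.median(examples[:, i]))     # median along dimension i
    left = examples[examples[:, i] <= m]     # xi <= m goes to the left subtree
    right = examples[examples[:, i] > m]     # xi >  m goes to the right subtree
    if len(left) == n or len(right) == n:    # duplicate-heavy data: no progress
        return {"leaf": examples}
    return {"dim": i, "median": m,
            "left": build_kd_tree(left, depth + 1),
            "right": build_kd_tree(right, depth + 1)}
```

Splitting at the median sends half the examples to each side, which is what keeps the tree balanced.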
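
The normalization snippet rescales each value so that $x_{j,i}$ becomes $(x_{j,i} - \mu_i)/\sigma_i$ and mentions the Mahalanobis distance as a more complex alternative. A sketch of both follows, assuming rows are points and columns are dimensions; the function names are illustrative.

```python
import numpy as np

def standardize(X: np.ndarray) -> np.ndarray:
    """Rescale each dimension to zero mean and unit variance, so a change
    of units in one dimension no longer dominates the total distance."""
    mu = X.mean(axis=0)        # per-dimension mean mu_i
    sigma = X.std(axis=0)      # per-dimension standard deviation sigma_i
    return (X - mu) / sigma

def mahalanobis(x: np.ndarray, y: np.ndarray, cov: np.ndarray) -> float:
    """Mahalanobis distance sqrt((x-y)^T S^-1 (x-y)); unlike per-dimension
    rescaling, it also accounts for correlations between dimensions."""
    d = x - y
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```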
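
The Gaussian filter snippet gives $G_\sigma$ in one and two dimensions and says filtering replaces each intensity with a weighted sum over its neighborhood. Here is a one-dimensional sketch; truncating the kernel at $3\sigma$ is a common convention assumed here, not taken from the text.

```python
import numpy as np

def gaussian_filter_1d(signal: np.ndarray, sigma: float) -> np.ndarray:
    """Smooth a 1-D signal with G_sigma(x) = exp(-x^2/(2 sigma^2)) / (sqrt(2 pi) sigma)."""
    radius = max(1, int(3 * sigma))            # 3-sigma truncation (assumed)
    xs = np.arange(-radius, radius + 1)
    g = np.exp(-xs**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    g /= g.sum()                               # renormalize the truncated kernel
    return np.convolve(signal, g, mode="same") # weighted sum around each point
```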
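
The outlier snippet's numbers can be checked directly: a point is interior only if every coordinate lies in [.01, .99], which happens with probability 0.98 per dimension, so in 200 dimensions the interior fraction is 0.98^200.

```python
# Probability that all 200 coordinates of a uniform point lie in [.01, .99].
interior = 0.98 ** 200
print(f"interior fraction: {interior:.4f}")  # ~0.0176, so over 98% are outliers
```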
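
The kernel snippet defines the polynomial kernel $K(x_j, x_k) = (1 + x_j \cdot x_k)^d$ and the Gaussian kernel $K(x_j, x_k) = e^{-\gamma \lvert x_j - x_k \rvert^2}$. Direct translations follow; the default parameter values are illustrative only.

```python
import numpy as np

def polynomial_kernel(xj: np.ndarray, xk: np.ndarray, d: int = 2) -> float:
    """K(xj, xk) = (1 + xj . xk)^d. The implicit feature space has dimension
    exponential in d, yet evaluating K costs only one dot product."""
    return (1.0 + float(np.dot(xj, xk))) ** d

def gaussian_kernel(xj: np.ndarray, xk: np.ndarray, gamma: float = 1.0) -> float:
    """K(xj, xk) = exp(-gamma * |xj - xk|^2)."""
    return float(np.exp(-gamma * np.sum((xj - xk) ** 2)))
```

The gap between the cheap kernel evaluation and the exponential-dimensional feature space is exactly what the snippet's truncated "kernel trick" heading refers to.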
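
The convolution snippet sets up a one-dimensional input vector x of n pixels, and the relations table ties this topic to stride. Below is a sketch of 1-D convolution over valid positions; the stride parameter is an assumed extension, since the snippet itself does not define it.

```python
import numpy as np

def convolve_1d(x: np.ndarray, kernel: np.ndarray, stride: int = 1) -> np.ndarray:
    """Slide `kernel` across the n-'pixel' input x, taking a dot product
    at each valid position, stepping `stride` positions at a time."""
    k = len(kernel)
    positions = range(0, len(x) - k + 1, stride)
    return np.array([np.dot(x[s:s + k], kernel) for s in positions])
```

For input size n and kernel size k the output has floor((n - k)/stride) + 1 entries, which is how striding shrinks a spatial dimension from layer to layer.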