extract abstract concepts out of my mind.

About one month before the ImageNet deadline. Come on, models.

Resize the ImageNet data (short edge to 256)
ls *.tar | xargs -P3 -I{} sh -c "tar -xvf {} -C ../original/" | xargs -P4 -I{} sh -c 'convert ../original/{} -colorspace rgb -resize 256x256^ ../resize/{}'

Crop the center 256x256

convert dragon.gif -resize 64x64^ -gravity center -extent 64x64 fill_crop_dragon.gif
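The same geometry, sketched in plain Python — just the arithmetic behind `-resize 256x256^` followed by a centered `-extent`, not the ImageMagick call itself (the function name is mine):

```python
def resize_then_center_crop(w, h, size=256):
    """Scale so the SHORT edge becomes `size` (what -resize 256x256^ does),
    then return the new dimensions and the centered size x size crop box
    as (left, top, right, bottom)."""
    scale = size / min(w, h)
    nw, nh = round(w * scale), round(h * scale)
    left = (nw - size) // 2
    top = (nh - size) // 2
    return (nw, nh), (left, top, left + size, top + size)

# A typical landscape ImageNet image:
print(resize_then_center_crop(500, 375))  # ((341, 256), (42, 0, 298, 256))
```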

So careless, I should've checked the generated images. Adding the -colorspace argument changes the look of the images!!! I think the ImageNet dataset contains some grayscale images, but those are already handled by sweet Caffe! Regenerating the images [====> ]

CCCP Pooling

Just came up with a very cool name for the node-sharing MLP in the network. Network in Network describes the overall structure; the first network could be any network, such as an RBF network.

When an MLP is used to convolve the input, and the feature maps share the nodes of the MLP, it is equivalent to multilayer cross channel parametric pooling. That is not an ugly name, but why not a cooler one?

Cascaded Cross Channel Parametric Pooling (CCCP Pooling)


Hella cool!

The implementation has been updated in my fork of Caffe.
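A minimal numpy sketch of the idea — not the Caffe code, all names and shapes are my own. Cross channel parametric pooling at every spatial position is just a linear recombination of the input channels (a 1x1 convolution), and cascading two of them with a non-linearity in between gives the shared micro MLP:

```python
import numpy as np

def cccp(feature_maps, weights, bias):
    """Cross channel parametric pooling: at every spatial position,
    linearly combine the input channels (equivalent to a 1x1 convolution).
    feature_maps: (C_in, H, W), weights: (C_out, C_in), bias: (C_out,)."""
    c_in, h, w = feature_maps.shape
    flat = feature_maps.reshape(c_in, -1)      # (C_in, H*W)
    out = weights @ flat + bias[:, None]       # (C_out, H*W)
    return out.reshape(-1, h, w)

def relu(x):
    return np.maximum(x, 0)

# Cascading two CCCP layers = the node-sharing micro MLP.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
w1, b1 = rng.standard_normal((16, 3)), np.zeros(16)
w2, b2 = rng.standard_normal((10, 16)), np.zeros(10)
y = cccp(relu(cccp(x, w1, b1)), w2, b2)
print(y.shape)  # (10, 8, 8): spatial layout preserved, channels recombined
```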

Some more explanations of Network in Network

A traditional non-convolutional neural net is a stack of fully connected layers. The layers extract different levels of concepts from the input. When the input carries spatial information, for instance when it is an image, that spatial information is lost in this process.

In a convolutional neural network, by contrast, the feature maps generated by the convolutional layers have the same layout as the input image, so the spatial information is preserved. Convolution scans the whole image with a square filter that extracts local information from the underlying patches. It works just like a detector.

A CNN is an extension of non-convolutional deep networks, obtained by replacing each fully connected layer with a convolutional layer. However, another extension is possible: why not scan the input with a whole deep network? Say we build a traditional non-convolutional deep net for classifying spatially aligned human faces; we can then apply this deep net to all patches of the input image.
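A toy sketch of "scanning the input with a whole deep network" (pure numpy, all names hypothetical): a tiny two-layer MLP applied to every patch, the way a single filter would be.

```python
import numpy as np

def slide_net(image, net, k=3):
    """Apply a whole (tiny) network to every k x k patch of a 2-D image,
    i.e. convolve the image with a deep net instead of a single filter."""
    h, w = image.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            out[i, j] = net(image[i:i + k, j:j + k].ravel())
    return out

# A hypothetical two-layer MLP "detector" with fixed random weights.
rng = np.random.default_rng(0)
w1, w2 = rng.standard_normal((8, 9)), rng.standard_normal(8)
net = lambda patch: np.maximum(w1 @ patch, 0) @ w2

img = rng.standard_normal((10, 10))
print(slide_net(img, net).shape)  # (8, 8): one response per patch
```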

In a conventional CNN, the first convolutional layer is usually a detector of edges, corners and other low-level features. The second layer then detects higher-level features such as parts; for face classification, it may learn features such as eyes and noses. The third layer may then combine the eyes and noses into an intact face. If we instead convolve the image with an aligned-face classifier, there is no such cascade.

In fact, partitioning an object into parts is more advantageous. 


For instance, in this figure, object A consists of two parts A1 and A2, and object B consists of B1 and B2. Our task is to separate A from B. Say each of A1, A2, B1, B2 has 5 variations. Then the sample space contains 5x5+5x5=50 variations. To classify all 50 variations into two classes, we need 50 templates, one to match each of them. But if each part is classified individually first, we need only 5+5+5+5+2=22 templates.

  • c = number of categories.
  • p = number of parts in each category.
  • v = number of variations of each part.

If each category is modelled from the root level, then the number of templates needed is: $$c\times{}v^p$$

If instead each part is modelled and then combined at the root level, a much smaller number of templates is needed: $$c+(p\times{}v\times{}c)$$
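Plugging in the numbers from the figure (c = 2 categories, p = 2 parts each, v = 5 variations per part) reproduces the counts above:

```python
# Worked example: c categories, p parts per category, v variations per part.
c, p, v = 2, 2, 5

root_level = c * v ** p        # model each category from the root: c * v^p
parts_first = c + p * v * c    # model parts first, then combine: c + p*v*c

print(root_level, parts_first)  # 50 22
```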

The above indicates that parts should be classified first, then the root; otherwise the model suffers from combinatorial explosion. Imagine that A and B are themselves parts of a bigger object: the number of combinations increases exponentially as the hierarchy layers up.

This is also the reason why we should not generate an overcomplete number of feature maps. Keep in mind that the whole network is doing abstraction: if overcomplete features are learned in one layer, another layer has to pay the price of shrinking the representation. It is true that an overcomplete set of filters (and thus an overcomplete set of feature maps) can better model the underlying image patch, but their number should be reduced (abstracted) before being fed into the next layer, whose filters cover a larger spatial region. Otherwise combinatorial explosion can happen.

Thus I think conventional CNN contains two functionalities:

  1. Partitioning
  2. Abstraction

Partitioning was already discussed in the previous paragraphs: filters start very small and then increase in size. Rather than saying that the layers of a CNN extract more and more abstract features, I would say they just extract features that are spatially larger and larger.

My understanding of abstraction is the process of classifying all the variations of A1 into the category A1, but not B1 or A2; thus A1 is an abstract concept. In a conventional CNN, the abstraction of each local patch is done by a linear classifier followed by a non-linear activation function, which is definitely not a strong abstraction. Weak abstraction resolves the combinatorial explosion to some extent, but it is not as potent as strong abstraction.

That is why Network in Network is proposed.



Estimation of diagonal Hessian.

Y. LeCun, L. Bottou, G. Orr and K. Müller: Efficient BackProp, in G. Orr and K. Müller (Eds.), Neural Networks: Tricks of the Trade, Springer, 1998.


So annoying to always have GIF animations in the right column on Tumblr.