Seminar will be given by Professor Alexandre Falcão, of the Universidade Estadual de Campinas
On Thursday, September 22, at 3 p.m., the Centro de Informática (CIn) at UFPE hosts a seminar by Professor Alexandre Falcão, of the Universidade Estadual de Campinas (Unicamp), on the topic “Building Convolutional Feature Extractors from Image Markers”. The event will be held in person, in the Center's amphitheater, with a live stream on CIn's YouTube channel.
Alexandre Falcão holds a degree in Electrical Engineering from the Universidade Federal de Pernambuco (1988) and earned his master's (1993) and doctorate (1997) in Electrical Engineering from the Universidade Estadual de Campinas. Since 1998, he has been a professor at the Instituto de Computação of the Universidade Estadual de Campinas. His expertise is in Computer Science, with an emphasis on Image Processing and Analysis.
Summary of the content to be presented at the seminar
The success of a Convolutional Neural Network (CNN) mainly depends on its feature extractor (encoder). For example, an encoder that successfully maps samples from distinct classes into separated subspaces of its output feature space can reduce the classifier's depth to a single decision layer. However, training deep models with backpropagation from scratch requires considerable human effort in data annotation and hyperparameter adjustment, leaving unanswered questions, such as: How many annotated samples are required to train the model? What is the simplest model that solves the problem? Can we build the model layer by layer and explain its decisions? We have addressed such questions by combining semi-automated data annotation, information (data) visualization, and model construction layer by layer. This talk focuses on the latter topic by presenting FLIM — Feature Learning from Image Markers. FLIM builds encoders from strokes drawn by the user on relevant regions of very few training images per class. In doing so, we aim to considerably reduce human effort in data annotation while giving the user more control over, and understanding of, the model. FLIM estimates the kernels of each convolutional layer from patches centered at the marked pixels and their activations in the previous layer. The process relies on patch clustering rather than backpropagation to select the most representative patches, which activate similar regions in new images. The resulting CNN usually outperforms the same architecture trained from scratch by backpropagation. The talk presents results compared to deep models for image classification and segmentation problems and discusses the open problems in FLIM to motivate collaborations.
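To make the kernel-estimation idea in the abstract concrete, the sketch below illustrates one plausible reading of a single FLIM-style layer: patches are extracted around user-marked pixels, clustered (here with a minimal k-means written from scratch), and the cluster centers are normalized into convolution kernels. This is not the authors' implementation — the function names, patch size, and normalization choices are assumptions for illustration only.

```python
import numpy as np

def extract_patches(image, marker_coords, size=3):
    """Collect size x size patches centered at user-marked pixels.
    `image` is a 2-D grayscale array; markers too close to the border are skipped."""
    r = size // 2
    patches = []
    for (y, x) in marker_coords:
        if r <= y < image.shape[0] - r and r <= x < image.shape[1] - r:
            patches.append(image[y - r:y + r + 1, x - r:x + r + 1].ravel())
    return np.array(patches, dtype=np.float64)

def kmeans(data, k, iters=20, seed=0):
    """Minimal k-means; the cluster centers play the role of representative patches."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def patches_to_kernels(centers, size=3):
    """Turn each cluster center into a zero-mean, unit-norm convolution kernel
    (an assumed normalization; the actual scheme may differ)."""
    kernels = centers - centers.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(kernels, axis=1, keepdims=True)
    kernels = kernels / np.where(norms > 0, norms, 1.0)
    return kernels.reshape(-1, size, size)

# Toy example: a synthetic image with user "strokes" on a bright stripe
# and on the background.
image = np.zeros((16, 16))
image[6:10, :] = 1.0
markers = [(7, 4), (8, 5), (7, 10), (8, 11), (2, 3), (13, 12)]
patches = extract_patches(image, markers)
kernels = patches_to_kernels(kmeans(patches, k=2), size=3)
print(kernels.shape)  # (2, 3, 3): one kernel per cluster of marked patches
```

Note how no backpropagation or labels-per-pixel are involved: the kernels come directly from statistics of the marked regions, which is what allows a layer to be built and inspected one step at a time.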