Implemented the few-shot learning assignments of the CS330 course offered by Stanford University, including MANN, MAML and ProtoNet. Worked with Python and TensorFlow.
GMVAE for clustering
In this project, I implemented a Gaussian Mixture Variational Autoencoder, representing the categorical latent variable with the Gumbel-Softmax distribution to avoid the multiple gradient estimations required when marginalizing over categories. Experiments showed ~80% clustering accuracy with multilayer perceptrons. Worked with PyTorch and TensorFlow.
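The reparameterization at the heart of this approach can be sketched as follows. This is a minimal NumPy illustration of Gumbel-Softmax sampling (not the project's actual PyTorch code); the function name and interface are mine for illustration.

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature=1.0, rng=None):
    """Draw an approximately one-hot sample from a categorical distribution.

    Perturbing logits with Gumbel(0, 1) noise and applying a softmax gives a
    continuous relaxation of categorical sampling, so a single reparameterized
    gradient path replaces per-category marginalization.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=logits.shape)
    gumbel = -np.log(-np.log(u + 1e-20) + 1e-20)  # Gumbel(0, 1) noise
    z = (logits + gumbel) / temperature            # lower temp -> closer to one-hot
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Usage: relaxed samples over, e.g., 10 clusters for a batch of 4 inputs
y = gumbel_softmax_sample(np.zeros((4, 10)), temperature=0.5)
```

As the temperature anneals toward zero the samples approach one-hot vectors, at the cost of higher gradient variance.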
CS231n: Convolutional Neural Networks for Visual Recognition
Implemented the assignments of the CS231n course offered by Stanford University, which covers machine learning topics including image classifiers (kNN, SVM, Softmax), CNNs, RNNs, LSTMs and GANs. Worked with Python and TensorFlow.
In this work, I present a study of transfer learning applied to trademark image retrieval. Initially, selective search is used to obtain region proposals; the resulting image regions are forwarded through CNN architectures (AlexNet, GoogLeNet and ResNet) pretrained on the ImageNet dataset. Feature representations are improved by applying feature aggregation methods (avg-pool, max-pool, R-MAC, etc.) over intermediate layers. Finally, re-ranking based on a graph-based query-specific fusion algorithm was applied to improve the results. Experiments demonstrate that intermediate layers produce better results for image retrieval, improving the baseline (transfer learning on last layers) mean average precision (mAP) by ~15%. Worked with Python and Caffe.
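The aggregation step above, pooling a convolutional feature map into a compact descriptor, can be sketched as below. This is an illustrative NumPy version under my own naming, not the project's Caffe pipeline; R-MAC would additionally max-pool over multiple spatial regions before summing.

```python
import numpy as np

def aggregate_features(fmap, method="max"):
    """Aggregate a CNN feature map of shape (C, H, W) into a C-dim descriptor.

    Global average or max pooling over the spatial dimensions, followed by
    L2 normalization so descriptors can be compared with cosine similarity.
    """
    if method == "avg":
        v = fmap.mean(axis=(1, 2))
    elif method == "max":
        v = fmap.max(axis=(1, 2))
    else:
        raise ValueError(f"unknown method: {method}")
    return v / (np.linalg.norm(v) + 1e-12)

# Usage: a 256-channel 7x7 intermediate feature map -> 256-dim descriptor
descriptor = aggregate_features(np.random.rand(256, 7, 7), method="max")
```

Ranking is then a nearest-neighbor search over these normalized descriptors, which is what the re-ranking stage refines.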
Project based on a competition offered by Kaggle. I built models based on feature selection, PCA and an ensemble of classifiers combining Random Forests and Gradient Boosting, improving the baseline accuracy by ~5%. Worked with Python.
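The combining step of such an ensemble can be sketched as soft voting over the classifiers' predicted probabilities. This is a generic NumPy illustration (function name and weighting scheme are mine), not the competition code.

```python
import numpy as np

def soft_vote(probas, weights=None):
    """Combine per-classifier probability matrices by weighted averaging.

    `probas` is a sequence of arrays, each of shape (n_samples, n_classes),
    e.g. the outputs of a Random Forest and a Gradient Boosting model.
    Returns (predicted class indices, averaged probabilities).
    """
    probas = np.asarray(probas, dtype=float)
    weights = np.ones(len(probas)) if weights is None else np.asarray(weights, float)
    avg = np.tensordot(weights / weights.sum(), probas, axes=1)
    return avg.argmax(axis=1), avg

# Usage: two classifiers, two samples, two classes
rf = [[0.9, 0.1], [0.2, 0.8]]   # hypothetical Random Forest probabilities
gb = [[0.6, 0.4], [0.4, 0.6]]   # hypothetical Gradient Boosting probabilities
pred, avg = soft_vote([rf, gb])
```

Weighting the stronger model more heavily (via `weights`) is a common refinement when validation scores differ.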
University project based on a competition offered by Kaggle. Feature engineering was applied to the initial data; since most of it consisted of categorical/text information, TF-IDF was applied to the text data and one-hot encoding to the categorical data. The classifiers employed were Logistic Regression and Neural Networks. Visualizations and plots were used for better understanding of the data. *Rankings shown in the report are outdated. Worked with Python and R.
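The TF-IDF weighting used for the text features can be sketched from first principles. This is a minimal NumPy illustration of the idea (in practice a library vectorizer with smoothing would be used); the function and its exact IDF variant are assumptions for illustration.

```python
import numpy as np

def tfidf(docs):
    """Minimal TF-IDF: `docs` is a list of token lists.

    Term frequency is the count of a term in a document divided by the
    document length; inverse document frequency down-weights terms that
    appear in many documents. Returns (matrix (n_docs, n_terms), vocabulary).
    """
    vocab = sorted({t for d in docs for t in d})
    idx = {t: j for j, t in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for i, d in enumerate(docs):
        for t in d:
            tf[i, idx[t]] += 1
        tf[i] /= max(len(d), 1)
    df = (tf > 0).sum(axis=0)            # documents containing each term
    idf = np.log(len(docs) / df) + 1.0   # one simple IDF variant
    return tf * idf, vocab

# Usage: a shared term ("b") gets a lower weight than distinctive ones
X, vocab = tfidf([["a", "b"], ["b", "c"]])
```

One-hot encoding of the categorical columns then yields a sparse block that is simply concatenated with this matrix before training.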
In this work, I developed a parallel version of the Gaussian filter to reduce image noise and the Sobel filter to detect edges. Two versions were implemented: a naive approach where only global memory was used, and an improved version based on shared memory. Experiments showed a ~55x speedup over the serial version. Image resolutions of 720p, 4K, 8K and 16622x4740 were considered in the experiments. Worked with C++ and CUDA.
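The Sobel step can be sketched as a serial reference, which is what the CUDA kernels parallelize, one thread per output pixel, with the shared-memory variant caching each thread block's input tile. This NumPy version is illustrative only, not the project's C++/CUDA code.

```python
import numpy as np

def sobel_magnitude(img):
    """Serial Sobel gradient magnitude on a 2D grayscale image.

    Convolves each interior pixel's 3x3 neighborhood with the horizontal
    and vertical Sobel kernels; boundary pixels are left at zero.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                 # vertical kernel is the transpose
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(1, h - 1):                 # the CUDA version maps this loop
        for x in range(1, w - 1):             # nest onto the thread grid
            win = img[y - 1:y + 2, x - 1:x + 2]
            gx = (win * kx).sum()
            gy = (win * ky).sum()
            out[y, x] = np.hypot(gx, gy)
    return out
```

In the shared-memory kernel, each block first loads its tile plus a one-pixel halo into on-chip memory, so the nine reads per pixel hit fast shared memory instead of global memory.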
Jhosimar Arias, Darwin Saire, Juan Hernández, Ricardo Nishihara, Marcos Piaia.
The project consisted of three phases: detection, segmentation and character separation. I implemented the HOG descriptor to extract features from plate candidates in the detection phase, and character separation based on pixel projection profiles. Worked with C.
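The projection-based separation can be sketched as follows: sum the ink pixels per column of the binarized plate and cut wherever the profile drops to zero. This NumPy sketch is illustrative (the original was written in C), and the function name is mine.

```python
import numpy as np

def split_characters(binary_plate):
    """Segment characters via the vertical projection profile.

    `binary_plate` is a 2D array with 1 for ink pixels and 0 for background.
    Columns with no ink mark the gaps between characters; each maximal run
    of non-empty columns is returned as a (start, end) column interval.
    """
    profile = binary_plate.sum(axis=0)        # ink pixels per column
    ink = profile > 0
    segments, start = [], None
    for x, on in enumerate(ink):
        if on and start is None:
            start = x                          # a character run begins
        elif not on and start is not None:
            segments.append((start, x))        # the run ends at column x
            start = None
    if start is not None:                      # run touching the right border
        segments.append((start, len(ink)))
    return segments
```

A small ink threshold (instead of `> 0`) makes the cut robust to noise, at the cost of occasionally splitting thin strokes.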