Software Developer | Data Science MSc graduate
EU
Skills
Machine Learning
Python
Git
Languages
Italian
English
Spanish
Natural Language Processing tasks have been widely studied on large corpora, but they have proven to be harder when applied to social media content. This is mainly due to the following reasons:
The lack of a unified evaluation framework makes it hard to compare different models.
In this project I used BERTweet, a large-scale language model pretrained on English tweets, and evaluated its performance on three different NLP tasks.
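For context, here is a minimal sketch (an illustration, not the project's actual code) of how a BERTweet checkpoint can be loaded and queried with the Hugging Face transformers library; the public model id vinai/bertweet-base is assumed, and the extracted vector would feed a downstream task head.

```python
# Minimal sketch: load BERTweet and extract a sentence-level feature for a tweet.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModel.from_pretrained("vinai/bertweet-base")

tweets = ["BERTweet handles noisy, informal tweet text better than models trained on formal corpora"]
batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # Last hidden states have shape (batch_size, seq_len, hidden_size);
    # the first-token vector can serve as a sentence representation.
    features = model(**batch).last_hidden_state[:, 0, :]

print(features.shape)  # e.g. torch.Size([1, 768])
```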
The colorization of greyscale images is an ill-posed problem that has been approached in several ways in the literature. This project provides a comparative analysis of five pre-trained colorization models and a cartoonization-based baseline of our own design. Performance is assessed with both quantitative and qualitative metrics, followed by a final evaluation of the results with respect to image filtering.
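To illustrate the quantitative side of such a comparison, here is a small sketch (my own, not the project's code) that scores a colorized output against the ground-truth image with two common metrics, PSNR and SSIM, via scikit-image; the function name evaluate_colorization is hypothetical.

```python
# Minimal sketch: quantitative comparison of a colorized image vs. ground truth.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_colorization(ground_truth: np.ndarray, colorized: np.ndarray) -> dict:
    """Both inputs are H x W x 3 uint8 RGB arrays of the same size."""
    return {
        "psnr": peak_signal_noise_ratio(ground_truth, colorized),
        "ssim": structural_similarity(ground_truth, colorized, channel_axis=-1),
    }
```

Higher PSNR/SSIM indicates outputs closer to the original colors, which is why such metrics are typically paired with a qualitative (visual) evaluation.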
The main goal of the project is to analyze three stochastic gradient-free Frank-Wolfe algorithms for producing Universal Adversarial Perturbations. These perturbations are designed to fool Convolutional Neural Networks, such as LeNet-5 and AlexNet, on the MNIST classification task.
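For illustration, here is a minimal sketch of one zeroth-order (gradient-free) Frank-Wolfe update for a universal perturbation constrained to an L-infinity ball; this is a simplification under my own assumptions (loss_on_batch is a hypothetical closure returning the attack loss on a batch of MNIST images), not the project's implementation.

```python
# Minimal sketch: one gradient-free Frank-Wolfe step for a universal perturbation.
import torch

def zo_gradient(loss_on_batch, delta, num_dirs=20, mu=1e-3):
    """Estimate the gradient w.r.t. delta with random finite differences."""
    grad = torch.zeros_like(delta)
    base = loss_on_batch(delta)
    for _ in range(num_dirs):
        u = torch.randn_like(delta)
        grad += (loss_on_batch(delta + mu * u) - base) / mu * u
    return grad / num_dirs

def frank_wolfe_step(loss_on_batch, delta, eps, step):
    # Linear minimization oracle over the L-inf ball: v = -eps * sign(grad)
    # (minimizing the loss; negate the loss inside loss_on_batch to maximize misclassification).
    grad = zo_gradient(loss_on_batch, delta)
    v = -eps * torch.sign(grad)
    # The convex combination keeps delta inside the ball, so no projection is needed.
    return (1 - step) * delta + step * v
```

The projection-free update is the appeal of Frank-Wolfe here: the perturbation stays within the allowed L-infinity budget by construction.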
Master's Degree
Bachelor's Degree