Leveraging GPU Power in Deep Learning
Within the context of the NVAITC (NVIDIA AI Technology Center) program, Giuseppe Fiameni (Solution Architect and Data Scientist at NVIDIA, and responsible for the NVAITC program in Italy) will be our guest and will hold the following seminar. The program is implemented in Italy as a partnership between NVIDIA, CINI, and CINECA (https://www.consorzio-cini.it/index.php/en/labaiis-home/labaiis-nvaitc), and aims to accelerate academic research in artificial intelligence through collaborative projects.
Deep Learning has been the most significant breakthrough of the past 10 years in the field of pattern recognition and machine learning. It has markedly improved the effectiveness of prediction models across many research topics and application fields, ranging from computer vision, natural language processing, and embodied AI to more traditional areas of pattern recognition. As these models grow in complexity to solve increasingly challenging problems on larger and larger datasets, the need for scalable methods and software to train them grows accordingly. While research efforts have concentrated on the design of effective feature extraction and prediction architectures, computation has moved from CPU-only approaches to the dominant use of GPUs and massively parallel devices, driven by large-scale, high-dimensional datasets. The goal of this talk is to provide attendees with a working knowledge of deep learning on HPC-class systems, including core concepts, scientific applications, performance optimization, and tips and techniques for scaling.
SPEAKER: GIUSEPPE FIAMENI is responsible for the NVAITC program in Italy. He is a Solution Architect and Data Scientist at NVIDIA, and he has worked as an HPC specialist at CINECA, the largest HPC facility in Italy, for more than 14 years, providing support for large-scale data analytics workloads.
DATE: March 1st
NOTES: The seminar is mainly directed at PhD students and researchers who have experience in training models through deep learning. At the end of the seminar, there will be room to discuss possible collaborations and joint projects through NVAITC (NVIDIA AI Technology Center). Frederic Pariente, the EU head of the program, will also be present for the discussion, together with other NVIDIA researchers.
PLACE: Aula Magna "Alberto Bertoni", Via Celoria 18
HOST: Alberto Borghese