Róbert Belanec
PhD student at the Kempelen Institute of Intelligent Technologies
The trustworthiness of generative AI models is an important topic, especially with the rising popularity of large language models. In recent years, the transformer architecture has become dominant in the field of natural language processing. However, the growth in parameter counts reduces the trustworthiness of these models, as it becomes increasingly difficult to interpret and explain their behavior. Moreover, such models often require not only vast computational resources for training but also huge amounts of training data. To address these problems, parameter-efficient fine-tuning (PEFT) methods have emerged. These methods leverage the power of large pre-trained models and adapt them to specific tasks or domains while training only a minimal number of parameters.
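As an illustration of the PEFT idea, the sketch below freezes a pre-trained Hugging Face causal language model and trains only a short soft prompt of continuous embeddings (prompt tuning in the style of Lester et al., 2021). This is a minimal sketch, not the author's method; the model name, prompt length, and learning rate are illustrative assumptions.

```python
# Minimal PEFT sketch: soft prompt tuning on a frozen backbone.
# Model name, prompt length, and learning rate are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed small model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every pre-trained weight: none of them receive gradients.
for param in model.parameters():
    param.requires_grad = False

# The only trainable parameters: a "soft prompt" of continuous
# embeddings prepended to the input (here 20 virtual tokens).
prompt_len = 20
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def forward_with_prompt(input_ids, labels=None):
    # Look up token embeddings, then prepend the trainable soft prompt.
    token_embeds = model.get_input_embeddings()(input_ids)
    batch = token_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
    return model(inputs_embeds=inputs_embeds, labels=labels)

# One illustrative training step on a toy example.
input_ids = tokenizer("Translate to French: hello", return_tensors="pt").input_ids
# Ignore the loss on the prompt positions (-100 is masked out by HF).
labels = torch.cat([torch.full((1, prompt_len), -100), input_ids], dim=1)
out = forward_with_prompt(input_ids, labels)
out.loss.backward()   # gradients flow only into soft_prompt
optimizer.step()
```

Only `prompt_len * embed_dim` parameters are ever updated, a tiny fraction of the frozen backbone, which is what makes such adaptation parameter-efficient.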
Keywords: large language models, parameter-efficient fine-tuning, generative AI, trustworthiness
Scientific area: Artificial Intelligence – Machine Learning with Limited Labelled Data
Visiting period: 3 months at Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI)