
Understanding cross-lingual abilities in large multilingual language models
dc.contributor.advisor: Libovický, Jindřich
dc.creator: Del Valle Girón, José Jacobo
dc.date.accessioned: 2023-11-06T22:36:13Z
dc.date.available: 2023-11-06T22:36:13Z
dc.date.issued: 2023
dc.identifier.uri: http://hdl.handle.net/20.500.11956/184175
dc.description.abstract [en_US]: Cross-lingual abilities have become evident in large multilingual language models over the past few years. However, why and under what circumstances they work is not entirely clear. In this thesis, we work towards a better understanding of these aspects in a specific subset of multilingual models, namely modular multilingual models with cross-lingual transfer learning abilities. We try to quantify the claims of Pfeiffer et al. [2022] regarding their proposed model, X-MOD, which was tested in a very specific setting that may not align with common low-resource settings. Specifically, we evaluate how the following factors affect downstream performance: the amount of available pre-training data; hyperparameters such as the number of training steps and the checkpoint selection criterion; and the available overlapping lexicon. With the help of our findings, we also aim to provide guidelines on how to best use X-MOD, especially from a low-resource perspective.
dc.language [cs_CZ]: English
dc.language.iso: en_US
dc.publisher [cs_CZ]: Univerzita Karlova, Matematicko-fyzikální fakulta
dc.subject [cs_CZ]: transfer learning|cross-lingual learning|low-resource|language models
dc.subject [en_US]: transfer learning|cross-lingual learning|low-resource|language models
dc.title [en_US]: Understanding cross-lingual abilities in large multilingual language models
dc.type [cs_CZ]: diplomová práce (master's thesis)
dcterms.created: 2023
dcterms.dateAccepted: 2023-09-06
dc.description.department [cs_CZ]: Ústav formální a aplikované lingvistiky
dc.description.department [en_US]: Institute of Formal and Applied Linguistics
dc.description.faculty [cs_CZ]: Matematicko-fyzikální fakulta
dc.description.faculty [en_US]: Faculty of Mathematics and Physics
dc.identifier.repId: 257456
dc.title.translated [cs_CZ]: Porozumění mezijazykovým vlastnostem ve velkých vícejazyčných jazykových modelech
dc.contributor.referee: Limisiewicz, Tomasz
thesis.degree.name: Mgr.
thesis.degree.level [cs_CZ]: navazující magisterské (post-Bachelor master's)
thesis.degree.discipline [cs_CZ]: Computer Science - Language Technologies and Computational Linguistics
thesis.degree.discipline [en_US]: Computer Science - Language Technologies and Computational Linguistics
thesis.degree.program [cs_CZ]: Computer Science - Language Technologies and Computational Linguistics
thesis.degree.program [en_US]: Computer Science - Language Technologies and Computational Linguistics
uk.thesis.type [cs_CZ]: diplomová práce (master's thesis)
uk.taxonomy.organization-cs: Matematicko-fyzikální fakulta::Ústav formální a aplikované lingvistiky
uk.taxonomy.organization-en: Faculty of Mathematics and Physics::Institute of Formal and Applied Linguistics
uk.faculty-name.cs: Matematicko-fyzikální fakulta
uk.faculty-name.en: Faculty of Mathematics and Physics
uk.faculty-abbr.cs: MFF
uk.degree-discipline.cs: Computer Science - Language Technologies and Computational Linguistics
uk.degree-discipline.en: Computer Science - Language Technologies and Computational Linguistics
uk.degree-program.cs: Computer Science - Language Technologies and Computational Linguistics
uk.degree-program.en: Computer Science - Language Technologies and Computational Linguistics
thesis.grade.cs: Výborně
thesis.grade.en: Excellent
uk.abstract.en [en_US]: Cross-lingual abilities have become evident in large multilingual language models over the past few years. However, why and under what circumstances they work is not entirely clear. In this thesis, we work towards a better understanding of these aspects in a specific subset of multilingual models, namely modular multilingual models with cross-lingual transfer learning abilities. We try to quantify the claims of Pfeiffer et al. [2022] regarding their proposed model, X-MOD, which was tested in a very specific setting that may not align with common low-resource settings. Specifically, we evaluate how the following factors affect downstream performance: the amount of available pre-training data; hyperparameters such as the number of training steps and the checkpoint selection criterion; and the available overlapping lexicon. With the help of our findings, we also aim to provide guidelines on how to best use X-MOD, especially from a low-resource perspective.
uk.file-availability: V
uk.grantor [cs_CZ]: Univerzita Karlova, Matematicko-fyzikální fakulta, Ústav formální a aplikované lingvistiky
thesis.grade.code: 1
uk.publication-place [cs_CZ]: Praha
uk.thesis.defenceStatus: O
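
For context, X-MOD (Pfeiffer et al. [2022]), the model the abstract evaluates, has a public implementation in the Hugging Face Transformers library. Below is a minimal sketch, not taken from the thesis itself, of the usage pattern the abstract alludes to: routing inputs through a per-language adapter and freezing the language-specific parameters before fine-tuning. The checkpoint name facebook/xmod-base and the language code en_XX follow the Transformers documentation; the example sentence and everything else are illustrative assumptions.

    from transformers import AutoTokenizer, XmodModel

    # Pre-trained X-MOD checkpoint released with Pfeiffer et al. [2022].
    tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
    model = XmodModel.from_pretrained("facebook/xmod-base")

    # Route inputs through the English language adapter; each language has
    # its own modular adapter inside every Transformer layer.
    model.set_default_language("en_XX")

    # Fine-tuning recipe from the Transformers docs: freeze the embeddings
    # and all language adapters so that only the shared weights are updated.
    model.freeze_embeddings_and_language_adapters()

    inputs = tokenizer("X-MOD routes each language through its own adapter.",
                       return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, sequence_length, 768)

Because each language keeps its own adapter, one can fine-tune the shared weights on a source language and later switch set_default_language to a target language; this modular routing is what enables the cross-lingual transfer whose limits the thesis quantifies.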



