2022
Pérez González de Martos, Alejandro: Deep Neural Networks for Automatic Speech-To-Speech Translation of Open Educational Resources. PhD Thesis, Universitat Politècnica de València, 2022 (Advisors: Alfons Juan Ciscar and Alberto Sanchis Navarro). DOI: 10.4995/Thesis/10251/184019. URL: http://hdl.handle.net/10251/184019. Tags: automatic dubbing, cross-lingual voice cloning, educational resources, simultaneous machine interpretation, text-to-speech.
Pérez González de Martos, Alejandro; Giménez Pastor, Adrià; Jorge Cano, Javier; Iranzo-Sánchez, Javier; Silvestre-Cerdà, Joan Albert; Garcés Díaz-Munío, Gonçal V.; Baquero-Arnal, Pau; Sanchis Navarro, Alberto; Civera Sáiz, Jorge; Juan Ciscar, Alfons; Turró Ribalta, Carlos: Doblaje automático de vídeo-charlas educativas en UPV[Media]. In: Proc. of VIII Congrés d'Innovació Educativa i Docència en Xarxa (IN-RED 2022), pp. 557–570, València (Spain), 2022. DOI: 10.4995/INRED2022.2022.15844. Tags: automatic dubbing, Automatic Speech Recognition, Machine Translation, OER, text-to-speech.
Abstract: More and more universities are investing in the production of digital content to support online or blended learning in higher education. In recent years, the MLLP research group has been working closely with the UPV's ASIC media services to enrich educational multimedia resources through the application of natural language processing technologies, including automatic speech recognition, machine translation and text-to-speech. In this work, we present the steps being followed for the comprehensive translation of these materials, specifically through (semi-)automatic dubbing using state-of-the-art speaker-adaptive text-to-speech technologies.
Iranzo-Sánchez, Javier; Jorge, Javier; Pérez-González-de-Martos, Alejandro; Giménez, Adrià; Garcés Díaz-Munío, Gonçal V.; Baquero-Arnal, Pau; Silvestre-Cerdà, Joan Albert; Civera, Jorge; Sanchis, Albert; Juan, Alfons: MLLP-VRAIN UPV systems for the IWSLT 2022 Simultaneous Speech Translation and Speech-to-Speech Translation tasks. In: Proc. of 19th Intl. Conf. on Spoken Language Translation (IWSLT 2022), pp. 255–264, Dublin (Ireland), 2022. DOI: 10.18653/v1/2022.iwslt-1.22. Tags: Simultaneous Speech Translation, speech-to-speech translation.
Abstract: This work describes the participation of the MLLP-VRAIN research group in the two shared tasks of the IWSLT 2022 conference: Simultaneous Speech Translation and Speech-to-Speech Translation. We present our streaming-ready ASR, MT and TTS systems for speech translation and synthesis from English into German. Our submission combines these systems in a cascade approach, paying special attention to data preparation and decoding for streaming inference.
Baquero-Arnal, Pau; Jorge, Javier; Giménez, Adrià; Iranzo-Sánchez, Javier; Pérez-González-de-Martos, Alejandro; Garcés Díaz-Munío, Gonçal V.; Silvestre-Cerdà, Joan Albert; Civera, Jorge; Sanchis, Albert; Juan, Alfons: MLLP-VRAIN Spanish ASR Systems for the Albayzin-RTVE 2020 Speech-To-Text Challenge: Extension. In: Applied Sciences, 12 (2), 804, 2022. DOI: 10.3390/app12020804. Tags: Automatic Speech Recognition, Natural Language Processing, streaming.
Abstract: This paper describes the automatic speech recognition (ASR) systems built by the MLLP-VRAIN research group of Universitat Politècnica de València for the Albayzín-RTVE 2020 Speech-to-Text Challenge, and includes an extension of the work consisting of building and evaluating equivalent systems under the closed data conditions of the 2018 challenge. The primary system (p-streaming_1500ms_nlt) was a hybrid ASR system using streaming one-pass decoding with a context window of 1.5 seconds. This system achieved 16.0% WER on the test-2020 set. We also submitted three contrastive systems. Of these, we highlight the system c2-streaming_600ms_t which, following a configuration similar to the primary system but with a smaller context window of 0.6 s, scored 16.9% WER on the same test set, with a measured empirical latency of 0.81±0.09 seconds (mean±stdev). That is, we obtained state-of-the-art latencies for high-quality automatic live captioning with a small relative WER degradation of 6%. As an extension, the equivalent closed-condition systems obtained 23.3% and 23.5% WER, respectively. When evaluated with an unconstrained language model, we obtained 19.9% and 20.4% WER; i.e., not far behind the top-performing systems, with only 5% of the full acoustic data and with the extra ability of being streaming-capable. Indeed, all of these streaming systems could be put into production environments for automatic captioning of live media streams.
2021
Pérez, Alejandro; Garcés Díaz-Munío, Gonçal; Giménez, Adrià; Silvestre-Cerdà, Joan Albert; Sanchis, Albert; Civera, Jorge; Jiménez, Manuel; Turró, Carlos; Juan, Alfons: Towards cross-lingual voice cloning in higher education. In: Engineering Applications of Artificial Intelligence, 105, 104413, 2021. DOI: 10.1016/j.engappai.2021.104413. Tags: cross-lingual voice conversion, educational resources, multilinguality, OER, text-to-speech.
Abstract: The rapid progress of modern AI tools for automatic speech recognition and machine translation is leading to a progressive cost reduction in producing publishable subtitles for educational videos in multiple languages. Similarly, text-to-speech technology is experiencing large improvements in terms of quality, flexibility and capabilities. In particular, state-of-the-art systems are now capable of seamlessly dealing with multiple languages and speakers in an integrated manner, thus enabling the cloning of a lecturer's voice in languages he or she might not even speak. This work reports the experience gained in using such systems at the Universitat Politècnica de València (UPV), mainly as guidance for other educational organizations willing to conduct similar studies. It builds on previous work on the UPV's main repository of educational videos, MediaUPV, to produce multilingual subtitles at scale and low cost. Here, a detailed account is given of how this work has been extended to also allow for massive machine dubbing of MediaUPV. This includes collecting 59 hours of clean speech data from UPV's academic staff, and extending our subtitle production pipeline with a state-of-the-art multilingual and multi-speaker text-to-speech system trained on the collected data. Our main result comes from an extensive, subjective evaluation of this system by the lecturers contributing to data collection. In brief, it is shown that text-to-speech technology is not only mature enough for its application to MediaUPV, but also needed as soon as possible by students to improve its accessibility and bridge language barriers.
Jorge, Javier; Giménez, Adrià; Baquero-Arnal, Pau; Iranzo-Sánchez, Javier; Pérez-González-de-Martos, Alejandro; Garcés Díaz-Munío, Gonçal V.; Silvestre-Cerdà, Joan Albert; Civera, Jorge; Sanchis, Albert; Juan, Alfons: MLLP-VRAIN Spanish ASR Systems for the Albayzin-RTVE 2020 Speech-To-Text Challenge. In: Proc. of IberSPEECH 2021, pp. 118–122, Valladolid (Spain), 2021 (1st place in the IberSpeech-RTVE 2020 TV Speech-to-Text Challenge). DOI: 10.21437/IberSPEECH.2021-25. Tags: Automatic Speech Recognition, Natural Language Processing, streaming.
Abstract: This paper describes the automatic speech recognition (ASR) systems built by the MLLP-VRAIN research group of Universitat Politècnica de València for the Albayzin-RTVE 2020 Speech-to-Text Challenge. The primary system (p-streaming_1500ms_nlt) was a hybrid BLSTM-HMM ASR system using streaming one-pass decoding with a context window of 1.5 seconds and a linear combination of an n-gram, an LSTM and a Transformer language model (LM). The acoustic model was trained on nearly 4,000 hours of speech data from different sources, using the MLLP's transLectures-UPV toolkit (TLK) and TensorFlow, whilst LMs were trained using SRILM (n-gram), CUED-RNNLM (LSTM) and Fairseq (Transformer), with up to 102G tokens. This system achieved 11.6% and 16.0% WER on the test-2018 and test-2020 sets, respectively. As it is streaming-enabled, it could be put into production environments for automatic captioning of live media streams, with a theoretical delay of 1.5 seconds. Along with the primary system, we also submitted three contrastive systems. Of these, we highlight the system c2-streaming_600ms_t which, following the same configuration as the primary one but using a smaller context window of 0.6 seconds and a Transformer LM, scored 12.3% and 16.9% WER respectively on the same test sets, with a measured empirical latency of 0.81±0.09 seconds (mean±stdev). That is, we obtained state-of-the-art latencies for high-quality automatic live captioning with a small relative WER degradation of 6%.
Garcés Díaz-Munío, Gonçal V.; Silvestre-Cerdà, Joan Albert; Jorge, Javier; Giménez, Adrià; Iranzo-Sánchez, Javier; Baquero-Arnal, Pau; Roselló, Nahuel; Pérez-González-de-Martos, Alejandro; Civera, Jorge; Sanchis, Albert; Juan, Alfons: Europarl-ASR: A Large Corpus of Parliamentary Debates for Streaming ASR Benchmarking and Speech Data Filtering/Verbatimization. In: Proc. Interspeech 2021, pp. 3695–3699, Brno (Czech Republic), 2021. DOI: 10.21437/Interspeech.2021-1905. Links: https://www.mllp.upv.es/wp-content/uploads/2021/09/europarl-asr-presentation-extended.pdf, https://www.youtube.com/watch?v=Tc0gNSDdnQg&list=PLlePn-Yanvnc_LRhgmmaNmH12Bwm6BRsZ, https://paperswithcode.com/paper/europarl-asr-a-large-corpus-of-parliamentary, https://github.com/mllpresearch/Europarl-ASR. Tags: Automatic Speech Recognition, speech corpus, speech data filtering, speech data verbatimization.
Abstract: We introduce Europarl-ASR, a large speech and text corpus of parliamentary debates including 1300 hours of transcribed speeches and 70 million tokens of text in English extracted from European Parliament sessions. The training set is labelled with the Parliament's non-fully-verbatim official transcripts, time-aligned. As verbatimness is critical for acoustic model training, we also provide automatically noise-filtered and automatically verbatimized transcripts of all speeches, based on speech data filtering and verbatimization techniques. Additionally, 18 hours of transcribed speeches were manually verbatimized to build reliable speaker-dependent and speaker-independent development/test sets for streaming ASR benchmarking. The availability of manual non-verbatim and verbatim transcripts for dev/test speeches makes this corpus useful for the assessment of automatic filtering and verbatimization techniques. This paper describes the corpus and its creation, and provides offline and streaming ASR baselines for both the speaker-dependent and speaker-independent tasks using the three training transcription sets. The corpus is publicly released under an open licence.
Pérez-González-de-Martos, Alejandro; Iranzo-Sánchez, Javier; Giménez Pastor, Adrià; Jorge, Javier; Silvestre-Cerdà, Joan-Albert; Civera, Jorge; Sanchis, Albert; Juan, Alfons: Towards simultaneous machine interpretation. In: Proc. Interspeech 2021, pp. 2277–2281, Brno (Czech Republic), 2021. DOI: 10.21437/Interspeech.2021-201. Tags: cross-lingual voice cloning, incremental text-to-speech, simultaneous machine interpretation, speech-to-speech translation.
Abstract: Automatic speech-to-speech translation (S2S) is one of the most challenging speech and language processing tasks, especially when considering its application to real-time settings. Recent advances in streaming Automatic Speech Recognition (ASR), simultaneous Machine Translation (MT) and incremental neural Text-To-Speech (TTS) make it possible to develop real-time cascade S2S systems with greatly improved accuracy. On the way to simultaneous machine interpretation, a state-of-the-art cascade streaming S2S system is described and empirically assessed in the simultaneous interpretation of European Parliament debates. We pay particular attention to the TTS component, both in terms of speech naturalness under a variety of response-time settings and in terms of speaker similarity for its cross-lingual voice cloning capabilities.
Pérez-González-de-Martos, Alejandro; Sanchis, Albert; Juan, Alfons: VRAIN-UPV MLLP's system for the Blizzard Challenge 2021. In: Proc. of Blizzard Challenge 2021, 2021. Links: http://hdl.handle.net/10251/192554, https://arxiv.org/abs/2110.15792, http://www.festvox.org/blizzard/blizzard2021.html. Tags: Blizzard Challenge, HiFi-GAN, text-to-speech.
Abstract: This paper presents the VRAIN-UPV MLLP's speech synthesis system for the SH1 task of the Blizzard Challenge 2021. The SH1 task consisted of building a Spanish text-to-speech system trained on (but not limited to) the corpus released by the Blizzard Challenge 2021 organization, which included 5 hours of studio-quality recordings from a native Spanish female speaker. In our case, this dataset was solely used to build a two-stage neural text-to-speech pipeline composed of a non-autoregressive acoustic model with explicit duration modelling and a HiFi-GAN neural vocoder. Our team is identified as J in the evaluation results. Our system obtained very good results in the subjective evaluation tests: only one of the other 11 participating systems achieved better naturalness than ours. Specifically, our system achieved a naturalness MOS of 3.61, compared to 4.21 for real samples.
2017
Piqueras, Santiago; Pérez, Alejandro; Turró, Carlos; Jiménez, Manuel; Sanchis, Albert; Civera, Jorge; Juan, Alfons: Hacia la traducción integral de vídeo charlas educativas. In: Proc. of III Congreso Nacional de Innovación Educativa y Docencia en Red (IN-RED 2017), pp. 117–124, València (Spain), 2017. URL: http://ocs.editorial.upv.es/index.php/INRED/INRED2017/paper/view/6812. Tags: MOOCs, multilingual, translation.
Abstract: More and more universities and educational institutions are investing in the production of technological resources for different uses in higher education. The MLLP research group has been working closely with the ASIC at UPV to enrich educational multimedia resources through the use of machine learning technologies, such as automatic speech recognition, machine translation or text-to-speech synthesis. In this work, developed under the framework of the UPV's Plan de Docencia en Red 2016-17, we present the application of innovative technologies to achieve the integral translation of educational videos.
2015
Pérez González de Martos, Alejandro; Silvestre-Cerdà, Joan Albert; Valor Miró, Juan Daniel; Civera, Jorge; Juan, Alfons: MLLP Transcription and Translation Platform. Miscellaneous, 2015 (short paper for demo presentation accepted at the 10th European Conf. on Technology Enhanced Learning (EC-TEL 2015), Toledo (Spain), 2015). Links: http://hdl.handle.net/10251/65747, http://www.mllp.upv.es/wp-content/uploads/2015/09/ttp_platform_demo_ectel2015.pdf, http://ectel2015.httc.de/index.php?id=722. Tags: Automatic Speech Recognition, Docencia en Red, Document translation, Efficient video subtitling, Machine Translation, MLLP, Post-editing, Video Lectures.
Abstract: This paper briefly presents the main features of MLLP's Transcription and Translation Platform, which uses state-of-the-art automatic speech recognition and machine translation systems to generate multilingual subtitles for educational audiovisual and textual content. It has been shown to cut user effort to as little as one third of the time needed to generate transcriptions and translations from scratch.
2014
Valor Miró, Juan Daniel; Spencer, R. N.; Pérez González de Martos, A.; Garcés Díaz-Munío, G.; Turró, C.; Civera, J.; Juan, A.: Evaluación del proceso de revisión de transcripciones automáticas para vídeos Polimedia. In: Proc. of I Jornadas de Innovación Educativa y Docencia en Red (IN-RED 2014), pp. 272–278, València (Spain), 2014. Links: http://hdl.handle.net/10251/40404, http://dx.doi.org/10.4995/INRED.2014, http://www.mllp.upv.es/wp-content/uploads/2015/04/paper1.pdf, https://www.mllp.upv.es/wp-content/uploads/2019/09/poster.pdf. Tags: ASR, Docencia en Red, evaluations, Polimedia, transcriptions.
Abstract: Video lectures are a tool of proven value and wide acceptance in universities, which is leading to video lecture platforms like poliMèdia (Universitat Politècnica de València). transLectures is an EU project that generates high-quality automatic transcriptions and translations for the poliMèdia platform and improves them by using massive adaptation and intelligent interaction techniques. In this paper we present the evaluation with lecturers carried out under the Docència en Xarxa 2012-2013 action plan, with the aim of studying the process of transcription post-editing in contrast with transcribing from scratch.
Pérez-González-de-Martos, A.; Silvestre-Cerdá, J. A.; Rihtar, M.; Juan, A.; Civera, J.: Using Automatic Speech Transcriptions in Lecture Recommendation Systems. In: Proc. of VIII Jornadas en Tecnología del Habla and IV Iberian SLTech Workshop (IberSpeech 2014), Las Palmas de Gran Canaria (Spain), 2014. URL: http://www.mllp.upv.es/wp-content/uploads/2015/04/lavie_is2014_camready1.pdf.
Valor Miró, Juan Daniel; Spencer, R. N.; Pérez González de Martos, A.; Garcés Díaz-Munío, G.; Turró, C.; Civera, J.; Juan, A.: Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures. In: Open Learning: The Journal of Open, Distance and e-Learning, 29 (1), pp. 72–85, 2014. DOI: 10.1080/02680513.2014.909722. Links: http://hdl.handle.net/10251/55925, http://www.mllp.upv.es/wp-content/uploads/2015/04/author_version.pdf.
Abstract: Video lectures are fast becoming an everyday educational resource in higher education. They are being incorporated into existing university curricula around the world, while also emerging as a key component of the open education movement. In 2007 the Universitat Politècnica de València (UPV) implemented its poliMèdia lecture capture system for the creation and publication of quality educational video content, and now has a collection of over 10,000 video objects. In 2011 it embarked on the EU-subsidised transLectures project to add automatic subtitles to these videos in both Spanish and other languages. By doing so, it allows access to their educational content by non-native speakers and the deaf and hard-of-hearing, as well as enabling advanced repository management functions. In this paper, following a short introduction to poliMèdia, transLectures and Docència en Xarxa (the UPV's action plan to boost the use of digital resources at the university), we discuss the three-stage evaluation process carried out with the collaboration of UPV lecturers to find the best interaction protocol for the task of post-editing automatic subtitles.
2013
Silvestre-Cerdà, Joan Albert; Pérez, Alejandro; Jiménez, Manuel; Turró, Carlos; Juan, Alfons; Civera, Jorge: A System Architecture to Support Cost-Effective Transcription and Translation of Large Video Lecture Repositories. In: Proc. of the IEEE Intl. Conf. on Systems, Man, and Cybernetics (SMC 2013), pp. 3994–3999, Manchester (UK), 2013. DOI: 10.1109/SMC.2013.682. Tags: Accessibility, Automatic Speech Recognition, Education, Intelligent Interaction, Language Technologies, Machine Translation, Massive Adaptation, Multilingualism, Opencast Matterhorn, Video Lectures.
Abstract: Online video lecture repositories are rapidly growing and becoming established as fundamental knowledge assets. However, most lectures are neither transcribed nor translated because of the lack of cost-effective solutions that can give accurate enough results. In this paper, we describe a system architecture that supports the cost-effective transcription and translation of large video lecture repositories. This architecture has been adopted in the EU project transLectures and is now being tested on a repository of more than 9,000 video lectures at the Universitat Politècnica de València. Following a brief description of this repository and of the transLectures project, we describe the proposed system architecture in detail. We also report empirical results on the quality of the transcriptions and translations currently being maintained and steadily improved.
2012
Silvestre-Cerdà, Joan Albert; Del Agua, Miguel; Garcés, Gonçal; Gascó, Guillem; Giménez-Pastor, Adrià; Martínez, Adrià; Pérez González de Martos, Alejandro; Sánchez, Isaías; Serrano Martínez-Santos, Nicolás; Spencer, Rachel; Valor Miró, Juan Daniel; Andrés-Ferrer, Jesús; Civera, Jorge; Sanchís, Alberto; Juan, Alfons: transLectures. In: Proceedings (Online) of IberSPEECH 2012, pp. 345–351, Madrid (Spain), 2012. Links: http://hdl.handle.net/10251/37290, http://lorien.die.upm.es/~lapiz/rtth/JORNADAS/VII/IberSPEECH2012_OnlineProceedings.pdf, https://web.archive.org/web/20130609073144/http://iberspeech2012.ii.uam.es/IberSPEECH2012_OnlineProceedings.pdf, http://www.mllp.upv.es/wp-content/uploads/2015/04/1209IberSpeech.pdf. Tags: Accessibility, Automatic Speech Recognition, Education, Intelligent Interaction, Language Technologies, Machine Translation, Massive Adaptation, Multilingualism, Opencast Matterhorn, Video Lectures.
Abstract: transLectures (Transcription and Translation of Video Lectures) is an EU STREP project in which advanced automatic speech recognition and machine translation techniques are being tested on large video lecture repositories. The project began in November 2011 and will run for three years. This paper outlines the project's main motivation and objectives, and gives a brief description of the two main repositories being considered: VideoLectures.NET and poliMèdia. The first results obtained by the UPV group for the poliMèdia repository are also provided.
Valor Miró, Juan Daniel; Pérez González de Martos, Alejandro; Civera, Jorge; Juan, Alfons: Integrating a State-of-the-Art ASR System into the Opencast Matterhorn Platform. In: Advances in Speech and Language Technologies for Iberian Languages (IberSpeech 2012), Communications in Computer and Information Science, vol. 328, pp. 237–246, Springer Berlin Heidelberg, 2012, ISBN: 978-3-642-35291-1. DOI: 10.1007/978-3-642-35292-8_20. Links: http://hdl.handle.net/10251/35190, http://www.mllp.upv.es/wp-content/uploads/2015/04/paper2.pdf. Tags: Google N-Gram, Language Modeling, Linear Combination, Opencast Matterhorn, Speech Recognition.