
MT-Eval 2016 : Translation evaluation: From fragmented tools and data sets to an integrated ecosystem


Link: http://www.cracking-the-language-barrier.eu/mt-eval-workshop-2016/
 
When May 24, 2016
Where Portorož, Slovenia
Submission Deadline Feb 15, 2016
Notification Due Mar 1, 2016
Final Version Due Mar 31, 2016
Categories    NLP
 

Call For Papers

LREC 2016 Workshop

Translation evaluation:
From fragmented tools and data sets to an integrated ecosystem

24 May 2016, Portorož, Slovenia

http://www.cracking-the-language-barrier.eu/mt-eval-workshop-2016/

Deadline for submissions: 15 February 2016


This workshop takes an in-depth look at an area of ever-increasing
importance: approaches, tools and data support for the evaluation of human
translation (HT) and machine translation (MT), with a focus on MT. Two clear
trends have emerged over the past several years. The first trend involves
standardising evaluations in research through large shared tasks in which
actual translations are compared to reference translations using automatic
metrics and/or human ranking. The second trend focuses on achieving high
quality translations with the help of increasingly complex data sets that
contain many levels of annotation based on sophisticated quality metrics
– often organised in the context of smaller shared tasks. In
industry, we also observe an increased interest in workflows for
high-quality outbound translation that combine Translation Memory (TM),
Machine Translation and post-editing. In stark contrast to this trend
towards quality translation (QT) and its inherent complexity, the data and
tooling landscapes remain heterogeneous, uncoordinated and not
interoperable.
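
To make the first trend concrete: in a shared task, system output is scored
against reference translations with automatic metrics such as BLEU. Below is
a minimal sketch of such reference-based scoring, assuming the NLTK toolkit;
the sentences and token lists are invented examples, not shared-task data.

    # Minimal sketch: reference-based scoring with corpus-level BLEU.
    # Assumes NLTK is installed (pip install nltk); all data is invented.
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    # One or more tokenised reference translations per hypothesis.
    references = [
        [["the", "cat", "sat", "on", "the", "mat"]],
        [["yesterday", "he", "went", "to", "the", "market"]],
    ]
    # Tokenised system outputs, aligned with the references above.
    hypotheses = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["he", "went", "to", "the", "market", "yesterday"],
    ]

    # Smoothing avoids zero n-gram counts on very short toy corpora.
    smooth = SmoothingFunction().method1
    score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
    print("corpus BLEU: %.3f" % score)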

The event will bring together MT and HT researchers, users and providers of
tools, and users and providers of the manual and automatic evaluation
methodologies currently used to evaluate HT and MT systems. The key
objective of the workshop is to initiate a dialogue and to discuss whether
the current approach, involving a diverse and heterogeneous set of data,
tools and evaluation methodologies, is adequate, or whether the community
should instead collaborate on building an integrated ecosystem that provides
better and more sustainable access to data sets, evaluation workflows,
approaches and metrics, as well as supporting processes such as annotation
and ranking.

The workshop is meant to stimulate a dialogue about the commonalities and
differences of the existing solutions in three areas: (1) tools, (2)
methodologies, (3) data sets. A key tension is that heterogeneous approaches
offer a high level of flexibility but little interoperability, whereas a
homogeneous approach would provide less flexibility but higher
interoperability. How much flexibility and interoperability does the MT/HT
research community need? How much does it want?


TOPICS OF INTEREST INCLUDE BUT ARE NOT LIMITED TO
--------------------------------------------------
- MT/HT evaluation methodologies (incl. scoring mechanisms, integrated
metrics)
- Benchmarks for MT evaluation
- Data and annotation formats for the evaluation of MT/HT
- Workbenches, tools, technologies for the evaluation of MT/HT
(incl. specialised workflows)
- Integration of MT/TM and terminology in industrial evaluation scenarios
- Evaluation ecosystems
- Annotation concepts such as MQM and DQF and their implementation in MT
evaluation processes (see the illustrative sketch after this list)
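
For readers less familiar with annotation concepts such as MQM, the sketch
below shows one hypothetical way to represent a single MQM-style error
annotation in code. The issue types and severities follow the public MQM
taxonomy, but the record layout itself is an illustrative assumption, not a
format prescribed by MQM or DQF.

    # Illustrative sketch only: one MQM-style error annotation on a
    # translated segment. The dataclass layout is a hypothetical choice;
    # MQM defines issue types and severities, not this structure.
    from dataclasses import dataclass

    @dataclass
    class MQMAnnotation:
        segment_id: str   # translated segment the issue occurs in
        issue_type: str   # MQM issue type, e.g. "accuracy/mistranslation"
        severity: str     # e.g. "minor", "major", "critical"
        start: int        # character offset where the error span begins
        end: int          # character offset where the error span ends
        note: str = ""    # optional free-text comment by the annotator

    # Invented example: a minor grammar issue in segment 42 of doc1.
    ann = MQMAnnotation(
        segment_id="doc1:seg42",
        issue_type="fluency/grammar",
        severity="minor",
        start=10,
        end=17,
        note="verb agreement",
    )
    print(ann)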

We invite contributions on the topics mentioned above and any related topics
of interest. The workshop website provides some additional information.


Important dates
------------------------------
- Publication of the call for papers: 10 December 2015
- Submissions due: 15 February 2016
- Notification of acceptance: 1 March 2016
- Final version of accepted papers: 31 March 2016
- Final programme and online proceedings: 15 April 2016
- Workshop: 24 May 2016 (this event will be a full-day workshop)


Submission
------------------------
Please submit your papers at https://www.softconf.com/lrec2016/MTEVAL/
before the deadline of 15 February 2016. Accepted papers will be presented
as oral presentations or as posters. All accepted papers will be published
in the workshop proceedings.

Papers should be formatted according to the stylesheet soon to be provided
on the LREC 2016 website and should not exceed 8 pages, including references
and appendices. Papers should be submitted in PDF format through the URL
mentioned above.

When submitting a paper, authors will be asked to provide essential
information about resources (in a broad sense, i.e., also technologies,
standards, evaluation kits, etc.) that have been used for the work described
in the paper or are a new result of their research. Moreover, ELRA
encourages all LREC authors to share the described LRs (data, tools,
services, etc.) to enable their reuse and the replicability of experiments
(including evaluation experiments).


Programme committee
----------------------------------------
Nora Aranberri, University of the Basque Country, Spain
Ondrej Bojar, Charles University in Prague, Czech Republic
Aljoscha Burchardt, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
Christian Dugast, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
Marcello Federico, Fondazione Bruno Kessler (FBK), Italy
Christian Federmann, Microsoft, USA
Rosa Gaudio, Higher Functions, Portugal
Josef van Genabith, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
Barry Haddow, University of Edinburgh, UK
Jan Hajic, Charles University in Prague, Czech Republic
Kim Harris, text&form, Germany
Matthias Heyn, SDL, Belgium
Philipp Koehn, Johns Hopkins University, USA, and University of Edinburgh, UK
Christian Lieske, SAP, Germany
Lena Marg, Welocalize, UK
Katrin Marheinecke, text&form, Germany
Matteo Negri, Fondazione Bruno Kessler (FBK), Italy
Martin Popel, Charles University in Prague, Czech Republic
Jörg Porsiel, Volkswagen AG, Germany
Georg Rehm, Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI),
Germany
Rubén Rodriguez de la Fuente, PayPal, Spain
Lucia Specia, University of Sheffield, UK
Marco Turchi, Fondazione Bruno Kessler (FBK), Italy
Hans Uszkoreit, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany

http://www.cracking-the-language-barrier.eu/mt-eval-workshop-2016/

This workshop is a joint activity of the EU projects QT21 and CRACKER.
