% Template and guidelines for submission of the working notes in the
% eHealth-KD Challenge @ IberLEF 2020. Based on Springer LLNCS templates.
% This is samplepaper.tex, a sample chapter demonstrating the
% LLNCS macro package for Springer Computer Science proceedings;
% Version 2.20 of 2017/10/04
%
\documentclass[]{llncs}
%
\usepackage{graphicx}
% Used for displaying a sample figure. If possible, figure files should
% be included in EPS format.
%
% If you use the hyperref package, please uncomment the following line
% to display URLs in blue roman font according to Springer's eBook style:
% \renewcommand\UrlFont{\color{blue}\rmfamily}
\begin{document}
%
\title{\{Team Name\} at eHealth-KD 2020}
\subtitle{\{Long Title\}}
%
%\titlerunning{Abbreviated paper title}
% If the paper title is too long for the running head, you can set
% an abbreviated paper title here
%
\author{First Author\inst{1}\orcidID{0000-1111-2222-3333} \and
Second Author\inst{2,3}\orcidID{1111-2222-3333-4444} \and
Third Author\inst{3}\orcidID{2222-3333-4444-5555}}
%
% First names are abbreviated in the running head.
% If there are more than two authors, 'et al.' is used.
%
\institute{Princeton University, Princeton NJ 08544, USA \and
Springer Heidelberg, Tiergartenstr. 17, 69121 Heidelberg, Germany
\email{lncs@springer.com}\\
\url{http://www.springer.com/gp/computer-science/lncs} \and
ABC Institute, Rupert-Karls-University Heidelberg, Heidelberg, Germany\\
\email{\{abc,lncs\}@uni-heidelberg.de}}
%
\maketitle % typeset the header of the contribution
%
\begin{abstract}
The abstract should briefly summarize the contents of the paper in
150--250 words.
\keywords{First keyword \and Second keyword \and Another keyword.}
\end{abstract}
%
%
%
\section*{General Details}
\begin{itemize}
\item Articles must be written in English.
\item Papers must be at least 5 pages long (mandatory minimum) and at most 10 pages, plus references.
\item Please respect the title format, where \textit{Team Name} is the team identifier officially published on the eHealth-KD 2020 website, and use \textit{Long Title} for additional details.
Contact the challenge organizers for any changes to team names.
\end{itemize}
\textbf{NOTE: This section is not meant to be part of the final version of your paper.}
\section{Introduction}
Provide a general overview of your system. This might include:
\begin{itemize}
\item Motivation for choosing the selected architectures in the context of the challenge.
\item Citations to any external resources or strategies used by your system.
\end{itemize}
Do not focus on describing the task and/or the corpus. Instead, include a citation to the Overview paper, and at most provide a very short
introduction to the challenge if you consider it relevant.
A preliminary citation is provided in this template~\cite{overview}, which will be updated
in due time.
\section{System Description}
Describe the architecture of your system in a concise and precise manner, such that other participants might be able to reproduce your work.
Make sure to include the following information:
\begin{itemize}
\item Your system's architecture.
\begin{itemize}
\item For example, if your system is based on deep learning techniques, mention the corresponding layers and other components.
\item Hyperparameters (e.g., layer sizes, dropout rates).
\end{itemize}
\item Input handling.
\begin{itemize}
\item Sentence tokenization.
\item Token representation and encoding.
\end{itemize}
\item Output handling.
\begin{itemize}
\item How does the entity extraction task translate to your system (e.g., as a sequence labeling problem, with or without support for overlapping entities)?
\item How does the relation extraction task translate to your system (e.g., pairwise queries, or a focus on a single entity)? A brief illustrative sketch of both formulations follows this list.
\end{itemize}
\item System training.
\begin{itemize}
\item On what infrastructure was your system trained?
\item Which collections of the dataset (i.e., train, dev, or combinations thereof) did you use for training, validation, etc.?
\end{itemize}
\end{itemize}
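For concreteness, the following minimal sketch (plain Python, with a hypothetical
tag set and tokenizer output, neither of which is mandated by the challenge) shows
one common way to cast entity extraction as BIO sequence labeling and relation
extraction as pairwise classification. It illustrates the level of detail expected,
not a required formulation:
\begin{verbatim}
# Hypothetical illustration: BIO encoding for entities (Task A) and
# pairwise instance generation for relations (Task B).
from itertools import permutations

def bio_encode(tokens, entities):
    """Map character-level entity spans to one BIO tag per token.

    tokens:   list of (text, start, end) tuples from the tokenizer.
    entities: list of (start, end, type) character spans.
    """
    tags = ["O"] * len(tokens)
    for (e_start, e_end, e_type) in entities:
        inside = False
        for i, (_, t_start, t_end) in enumerate(tokens):
            if t_start >= e_start and t_end <= e_end:
                tags[i] = ("I-" if inside else "B-") + e_type
                inside = True
    return tags

def relation_instances(entities):
    """Enumerate ordered entity pairs; a classifier then labels each
    pair with a relation type (or 'none') independently."""
    return list(permutations(range(len(entities)), 2))

tokens = [("El", 0, 2), ("asma", 3, 7), ("afecta", 8, 14),
          ("la", 15, 17), ("respiracion", 18, 29)]
entities = [(3, 7, "Concept"), (18, 29, "Concept")]
print(bio_encode(tokens, entities))
# ['O', 'B-Concept', 'O', 'O', 'B-Concept']
print(relation_instances(entities))
# [(0, 1), (1, 0)]
\end{verbatim}
Note that this particular encoding discards overlapping entities; if your system
handles overlaps differently, that is exactly the kind of detail worth describing.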
Consider the following questions while writing your working notes:
\begin{itemize}
\item What general approach does your system best fit into?
\begin{itemize}
\item Deep learning based?
\item Classical ML algorithms?
\item Handcrafted rules?
\item NLP features?
\item Other?
\end{itemize}
\item Does your system solve Tasks A and B jointly, or with several independent models? Do they share any layers?
\item Does your system use pretrained word embeddings? Custom ones?
\begin{itemize}
\item Which one?
\item Trained on general-domain corpora? Medicine-related ones?
\end{itemize}
\item Does your system use pretrained contextual embeddings such as BERT?
\begin{itemize}
\item Which one?
\item How do you incorporate it? Fine-tuning? Pre-computed features? (A sketch of the latter follows this list.)
\end{itemize}
\item Does your system use additional syntactic features?
\begin{itemize}
\item Which ones?
\item POS-tag information?
\item Dependency parsing information?
\item Character-level representations?
\end{itemize}
\item Does your system make use of the additional 3000 automatically annotated sentences from Medline that were provided for further training?
\begin{itemize}
\item Which of them were used? All of them? Only those with the highest agreement?
\end{itemize}
\item Does your system extend the training data available with any other extra resources?
\begin{itemize}
\item Which one?
\item Does it use sentences from the first edition of the competition?
\end{itemize}
\item Does your system use any other type of external knowledge?
\begin{itemize}
\item Which one?
\item How do you think it contributes to your system?
\end{itemize}
\item Does your system apply attention-based techniques to solve any task?
\begin{itemize}
\item Which one?
\item How?
\end{itemize}
\item Does your system use any strategy for additional performance boosting, such as ensemble methods? (A sketch of majority voting follows this list.)
\begin{itemize}
\item Which one?
\item What are the relevant parameters and overall details?
\end{itemize}
\item Do you perform any type of hyperparameter tuning or architecture search?
\begin{itemize}
\item Do you use an external tool (e.g., \textit{AutoSklearn}, \textit{AutoKeras}), or a
custom solution?
\item How did you split the training data for cross-validation purposes? (A sketch follows this list.)
\item What are the relevant parameters, execution time, resources, etc.?
\end{itemize}
\item Regarding the transfer learning evaluation scenario (scenario 4), does your system take any additional considerations into account?
\begin{itemize}
\item Which ones?
\item Were they taken into account the same way for all scenarios, or were particular data or architectures used to solve each scenario?
\end{itemize}
\end{itemize}
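If, for instance, contextual embeddings are used as pre-computed features, a
minimal sketch could look as follows. It assumes the HuggingFace
\texttt{transformers} library and a multilingual BERT checkpoint, both of which
are only illustrative choices, not endorsements:
\begin{verbatim}
# Hypothetical illustration: frozen contextual features from BERT.
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()  # feature extraction only; no weight updates

inputs = tokenizer("El asma afecta la respiracion.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per subword token, usable as frozen input
# features for a downstream tagger. Fine-tuning would instead keep
# gradients enabled and update `model` jointly with the task layers.
features = outputs.last_hidden_state  # shape: (1, num_subwords, 768)
\end{verbatim}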
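Similarly, a majority-vote ensemble over per-token predictions can be sketched in
a few lines (plain Python; averaging probabilities, stacking, or other schemes
are equally valid):
\begin{verbatim}
# Hypothetical illustration: majority voting over tag sequences.
from collections import Counter

def majority_vote(runs):
    """runs: one predicted tag sequence per model, all equal length."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*runs)]

print(majority_vote([["B-Concept", "O"],
                     ["B-Concept", "O"],
                     ["O", "O"]]))
# ['B-Concept', 'O']
\end{verbatim}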
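Finally, a cross-validation split for hyperparameter tuning might be reported
along these lines (assuming scikit-learn, again an illustrative choice):
\begin{verbatim}
# Hypothetical illustration: 5-fold split of the training sentences.
from sklearn.model_selection import KFold

sentences = [f"sentence {i}" for i in range(100)]  # placeholder corpus
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, dev_idx) in enumerate(kf.split(sentences)):
    train = [sentences[i] for i in train_idx]
    dev = [sentences[i] for i in dev_idx]
    # Train one candidate configuration on `train`, score it on `dev`,
    # and keep the hyperparameters with the best average dev score.
    print(fold, len(train), len(dev))
\end{verbatim}
Reporting the split strategy, the number of folds, and the random seed makes such
experiments much easier to reproduce.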
The purpose of these questions is to highlight the kind
of details we believe readers will be most interested in.
Do not take these questions literally, nor organize the section as explicit answers
to these items; rather, use them to guide your overall system
description while organizing the text in a coherent narrative.
If the answer to any of the previous questions is simply ``no'', or it is completely unrelated
to your approach, then do not mention those elements.
Likewise, include any additional details that you consider relevant for understanding and
contextualizing your contributions.
\section{Results}
Report the performance achieved by your system in each run and scenario as officially published.
If your team developed one or more systems that were not submitted to the challenge, feel free to include them in this section, but always note that they were not part of the officially evaluated runs.
Include any tables and figures that you consider relevant.
You can use any officially announced evaluation statistics to compare your results with those of other participants, and you can design and discuss other comparison metrics to address the
issues you consider most relevant with respect to your system.
\section{Discussion}
Though not mandatory, we encourage you to discuss the main insights that can be derived from the performance of each of your runs. You can include additional experimentation, analysis of the impact of hyperparameters, analysis of feature relevance, etc.
Remember that the most important results of this challenge are not the F1 metrics per se, but rather
any interesting findings and insights that help advance the state of the art.
\section{Conclusions}
Share your final conclusions on your systems and any future work recommendations.
\bibliographystyle{splncs04}
\bibliography{bibliography}
\end{document}