subtitles/en/tasks_01_🤗-tasks-question-answering.srt
1
00:00:04,400 --> 00:00:06,480
Welcome to the Hugging Face tasks series.
2
00:00:07,200 --> 00:00:10,080
In this video, we will take a look
at the Question Answering task.
3
00:00:13,120 --> 00:00:17,200
Question answering is the task of
extracting an answer from a given document.
4
00:00:21,120 --> 00:00:25,600
Question answering models take a context,
which is the document you want to search in,
5
00:00:26,240 --> 00:00:31,440
and a question, and return an answer.
Note that the answer is not generated,
6
00:00:31,440 --> 00:00:37,600
but extracted from the context. This type
of question answering is called extractive.
7
00:00:42,320 --> 00:00:46,960
The task is evaluated on two
metrics, exact match and F1-Score.
8
00:00:49,680 --> 00:00:52,320
As the name implies, exact match looks for an
9
00:00:52,320 --> 00:00:57,840
exact match between the predicted
answer and the correct answer.
10
00:01:00,080 --> 00:01:05,520
A common metric used is the F1-Score,
which is computed over the tokens of the
11
00:01:05,520 --> 00:01:10,960
predicted and the correct answer. It is the
harmonic mean of two metrics called
12
00:01:10,960 --> 00:01:16,560
precision and recall, which are
widely used in classification problems.
13
00:01:20,880 --> 00:01:28,240
An example dataset used for this task is called
SQuAD. This dataset contains contexts, questions
14
00:01:28,240 --> 00:01:32,080
and answers obtained from
English Wikipedia articles.
15
00:01:35,440 --> 00:01:39,520
You can use question answering models to
automatically answer the questions asked
16
00:01:39,520 --> 00:01:46,480
by your customers. You simply need a document
containing information about your business
17
00:01:47,200 --> 00:01:53,840
and can query that document
with your customers' questions.
18
00:01:55,680 --> 00:02:06,160
For more information about the Question Answering
task, check out the Hugging Face course.
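A note on the F1-Score mentioned above: it can be sketched as token-level precision and recall over the predicted and correct answers. A minimal illustration in plain Python (the helper name and example strings are ours, not from the video):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a correct answer,
    as commonly used to evaluate extractive QA (e.g. on SQuAD)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Tokens shared between prediction and reference (with multiplicity).
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# A partially correct answer scores between 0 and 1:
print(token_f1("in New York City", "New York City"))
```

Exact match, by contrast, would score the partially correct answer above as 0, which is why F1 is reported alongside it.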