0:00:15 | Thank you. |
---|
0:00:17 | My name is Elahe Rahimtoroghi, and I work with Marilyn Walker at the Natural Language and Dialogue Systems Lab at UC Santa Cruz. |
---|
0:00:24 | I'm going to talk about learning fine-grained knowledge about contingent relations between everyday events. |
---|
0:00:32 | Our goal in this work is to capture commonsense knowledge about the fine-grained events of everyday experience, the events that occur in people's everyday lives. |
---|
0:00:43 | For example, opening the fridge enables preparing food, and an alarm going off triggers waking up and getting out of bed. |
---|
0:00:56 | We believe that the relation between these events is a contingency relation, based on the definition of contingency from the Penn Discourse Treebank, which has two types: Cause and Condition. |
---|
0:01:13 | Another motivation for our work is that much of the user-generated content on social media is provided by ordinary people telling stories about their daily lives, and stories are rich in commonsense knowledge and in contingency relations between events. |
---|
0:01:27 | I have two examples here from our dataset. Our dataset is a subset of the Spinn3r corpus, which has millions of blog posts, and it contains personal stories written by people about their daily lives on their blogs. |
---|
0:01:47 | In the examples you can see that there are sequences of coherent events in the stories. |
---|
0:01:52 | The first one is about going on a camping trip, and there is a sequence of events: they pack everything, they get up in the morning, they get to the campground, they find a place and set up the tent. |
---|
0:02:09 | The second story is about witnessing a storm: the hurricane makes landfall, the wind blows, a tree falls, and then people start cleaning up and cutting up the fallen trees. |
---|
0:02:23 | So this commonsense knowledge, this contingency relation between events, is implicit in the stories, and we want to learn it. |
---|
0:02:32 | I will show in this talk that this fine-grained knowledge is not found in previous work on the extraction of narrative events and event collections. |
---|
0:02:45 | Much of the previous work does not focus particularly on the relation between the events; they characterize what they learn as collections of events that tend to co-occur, and they are somewhat vague about what the relation between the sequence of events is. |
---|
0:03:03 | That work is also mostly focused on the newswire genre, so the type of knowledge it can learn is limited to the newsworthy events found in news articles, like bombings or explosions. |
---|
0:03:18 | As for evaluation, they mostly use the narrative cloze test, which we believe is not the right way to evaluate this type of knowledge. |
---|
0:03:25 | So in our work we focus on contingency relations between events. |
---|
0:03:33 | We use personal blog stories as the dataset, so we can learn new types of knowledge about events other than newsworthy events. |
---|
0:03:44 | We also use two evaluation methods: one of them is inspired and motivated by previous work, and the other one is completely new. |
---|
0:03:57 | The stories in this dataset tend to be told in chronological order, so there is a temporal order between the events told in a story. This is great, because temporal order between events is a strong cue to contingency, which makes this dataset very suitable for our task. |
---|
0:04:15 | But this dataset comes with its own challenges. It has a more informal structure compared to news articles, which are well structured, and its structure is more similar to oral narrative. |
---|
0:04:26 | In one of our previous studies we applied Labov and Waletzky's oral narrative framework to label the clauses in these personal stories, and we showed that only about a third of the sentences in personal narratives describe actions and events; the other two thirds give background or try to convey the emotions of the narrator. |
---|
0:04:55 | I have an example here. I'm not going to go through the labels, but you can see that there is some background at the start of the story, and then there are actions and events about the person getting stopped by the traffic police and eventually being let go. |
---|
0:05:15 | So it is not all events; there is a lot of other material going on, which makes it more challenging. |
---|
0:05:24 | So we need novel methods to learn useful relations between events from this dataset, and I'm going to show in the experiments that if we apply the methods that work on news articles for extracting event collections, we don't get good results on this dataset. |
---|
0:05:44 | What are events? We define an event as a verb with three arguments: the subject, the direct object, and the particle. Here are some examples. |
---|
0:05:55 | This definition is motivated by previous work by Pichotta and Mooney (2014), who showed that a multi-argument event representation is richer and more capable of capturing the interactions between events. |
---|
0:06:11 | They use the verb, the subject, and the object; we also add the particle, because we think it is necessary for conveying the right meaning of an event. |
---|
0:06:21 | For example, the first event in the table is "put up tent": you have "put", the direct object "tent", and the particle "up", and you can see how all of these arguments contribute to the meaning of the event. "Put" by itself has a different meaning than "putting up a tent", and the particle matters too: "put" and "put up" tell you different things. |
---|
0:06:45 | In our genre especially this is important, because the text is more informal and has a lot of phrasal verbs, so it is important to have all the arguments in the event representation. |
---|
0:06:54 | For extracting events we use the Stanford dependency parser, and we use the dependency parse trees to extract the verbs and their arguments. |
---|
0:07:04 | We also use the Stanford named entity recognizer to do a little generalization of the arguments: for example, terms or phrases that refer to a location are mapped to their type, LOCATION, and the same for PERSON, TIME, and DATE. |
---|
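To make this extraction step concrete, here is a minimal sketch, assuming stanza's Universal Dependencies parser and NER as a stand-in for the Stanford tools mentioned in the talk; the dependency labels handled and the `extract_events` helper are illustrative assumptions, not the authors' code.

```python
# Sketch: extract (subject, verb, particle, direct object) events from text,
# generalizing named-entity arguments to their types (PERSON, DATE, LOC, ...).
import stanza

# stanza.download("en") may be needed on first use.
nlp = stanza.Pipeline("en")

def extract_events(text):
    events = []
    for sent in nlp(text).sentences:
        # Map entity tokens to their entity type for argument generalization.
        ent_type = {tok.lower(): ent.type
                    for ent in sent.ents for tok in ent.text.split()}
        for word in sent.words:
            if word.upos != "VERB":
                continue
            subj = obj = prt = None
            for child in sent.words:
                if child.head != word.id:
                    continue
                if child.deprel.startswith("nsubj"):
                    subj = ent_type.get(child.text.lower(), child.lemma)
                elif child.deprel in ("obj", "dobj"):
                    obj = ent_type.get(child.text.lower(), child.lemma)
                elif child.deprel == "compound:prt":
                    prt = child.lemma
            events.append((subj, word.lemma, prt, obj))
    return events

print(extract_events("We put up the tent near the lake."))
# e.g. [('we', 'put', 'up', 'tent')]
```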
0:07:23 | The contributions of our work: we have a data collection step where we generate topic-specific sets of personal stories using a bootstrapping algorithm. |
---|
0:07:33 | Then we directly compare our method for extracting these contingency relations between events on a general-domain set of stories and on the topic-specific data that we have generated. |
---|
0:07:46 | We will show that we can learn more fine-grained, richer, and more interesting knowledge from the topic-specific corpus, and our model works significantly better on the topic-specific corpus. This is the first time this comparison has been made directly between these two types of datasets for event collections. |
---|
0:08:04 | We will also show that this improvement is possible even with a smaller amount of data in the topic-specific corpus. |
---|
0:08:13 | We have two sets of experiments where we directly compare our work to the most relevant previous work, and we use two evaluation methods for these experiments. |
---|
0:08:25 | Now, the data collection part: we have an unsupervised algorithm for generating a topic-specific dataset using a bootstrapping method. The corpus here is the general, unannotated blogs corpus that contains all the personal blog stories. |
---|
0:08:40 | We first manually label a small seed set for the bootstrapping, about two to three hundred stories for each topic, and we feed this into AutoSlog-TS, a weakly supervised pattern learner. |
---|
0:08:56 | It generates event patterns specific to that topic. For example, if we are looking at the camping trip stories, we can generate a pattern like an NP followed by a preposition followed by an optional NP, where the head of the first noun phrase is "camping". |
---|
0:09:14 | So it generates event patterns that are strongly correlated with the topic, and then we use these patterns to bootstrap: to automatically label more stories on the topic from the corpus. |
---|
0:09:28 | We feed in the unlabeled data from the blogs corpus and apply these patterns, and based on how many of a topic's patterns we can find in an unlabeled story, we label it as being about that topic. |
---|
0:09:43 | Starting from about two to three hundred stories per topic, with one round of bootstrapping we generated about a thousand newly labeled stories per topic. |
---|
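As a rough illustration of this labeling step, here is a small sketch; the pattern lists, the threshold, and the `label_story` function are my own simplified assumptions, not the patterns actually learned by AutoSlog-TS.

```python
# Label an unlabeled story with a topic when enough of that topic's
# event patterns (learned from the hand-labeled seed set) match it.
import re

topic_patterns = {
    "camping": [r"\bcamping\b", r"\bset up\b.*\btent\b", r"\bcampground\b"],
    "storm":   [r"\bmake landfall\b", r"\btree (fell|fall)\b", r"\blose power\b"],
}
MIN_MATCHES = 2  # illustrative threshold

def label_story(story_text, patterns=topic_patterns, k=MIN_MATCHES):
    text = story_text.lower()
    best_topic, best_hits = None, 0
    for topic, pats in patterns.items():
        hits = sum(1 for p in pats if re.search(p, text))
        if hits >= k and hits > best_hits:
            best_topic, best_hits = topic, hits
    return best_topic  # None if the story matches no topic well enough

print(label_story("We went camping, found a spot and set up the tent."))
# camping
```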
0:09:55 | Here I am presenting the results on two topics from our corpus: the camping stories and the stories about witnessing a major storm. Starting from about three hundred stories each, we generated the expanded corpora. |
---|
0:10:13 | For learning the contingent relations between events we use the causal potential method introduced by Beamer and Girju (2009). |
---|
0:10:23 | It is an unsupervised distributional measure: it measures the tendency of an event pair to encode a causal relation. Event pairs that have a higher causal potential score have a higher probability of occurring in a causal context. |
---|
0:10:42 | The first component is the pointwise mutual information, and the second one takes into account the temporal order between the events: if the events co-occur more often in this particular order, we get a higher causal potential score. |
---|
0:10:59 | This is great for our corpus, because the events tend to be told in the right temporal order. |
---|
0:11:08 | Then we calculate the causal potential for every pair of adjacent events in the corpus using a skip-2 bigram model, because, as I showed in the example, not all the sentences are events, and events can be interrupted by non-events. |
---|
0:11:27 | That is why we use the skip-2 bigram model, which defines two events to be adjacent if they are within two or fewer events of each other. |
---|
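A minimal sketch of how a causal-potential-style score can be computed from skip-2 bigram counts follows; this is my own simplified reconstruction (with add-one smoothing in the ordering term), not the authors' implementation.

```python
# Causal potential sketch: CP(a, b) = PMI(a, b) + log(count(a->b) / count(b->a)),
# with pair counts taken over skip-2 bigrams of events within each story.
import math
from collections import Counter

def causal_potentials(event_sequences, skip=2):
    """event_sequences: list of stories, each a list of event strings in textual order."""
    uni, ordered = Counter(), Counter()
    total = 0
    for events in event_sequences:
        uni.update(events)
        total += len(events)
        for i, a in enumerate(events):
            # skip-2 bigrams: pairs separated by at most `skip` intervening events
            for b in events[i + 1 : i + 2 + skip]:
                ordered[(a, b)] += 1
    pair_total = sum(ordered.values())
    cp = {}
    for (a, b), n_ab in ordered.items():
        n_ba = ordered.get((b, a), 0)
        p_pair = (n_ab + n_ba) / pair_total
        pmi = math.log(p_pair / ((uni[a] / total) * (uni[b] / total)))
        # The second term rewards pairs occurring mostly in this temporal order.
        order_term = math.log((n_ab + 1) / (n_ba + 1))
        cp[(a, b)] = pmi + order_term
    return cp

stories = [["pack gear", "drive to campground", "set up tent", "build fire"],
           ["pack gear", "set up tent", "cook dinner"]]
scores = causal_potentials(stories)
print(max(scores, key=scores.get))  # highest-scoring event pair
```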
0:11:38 | Most of the previous work uses the narrative cloze test for evaluating the sequences of events they have learned: take a sequence of narrative events in a document from which one event has been removed, and the task is to predict the removed event. |
---|
0:11:57 | We believe this is not suitable for our task of evaluating the contingency between events. Also, previous work by Pichotta and Mooney showed that a unigram model's results are nearly as good as more complicated, more sophisticated models on this task, so it is not good at capturing all the capabilities of a model. |
---|
0:12:22 | So we are proposing a new evaluation method, which is motivated by COPA. COPA (Choice of Plausible Alternatives) was an evaluation method for commonsense causal reasoning based on two-choice questions. |
---|
0:12:38 | We automatically generate these two-choice questions from a separate held-out test set that we have for each dataset. |
---|
0:12:45 | Each two-choice question consists of one question event, which is extracted from the test set, so it actually occurs in the test set. |
---|
0:12:58 | One of the choices, which is the correct answer, is the event that follows the question event in the test set. The second choice, which is not the correct answer, is randomly generated from the list of all the events we have. |
---|
0:13:23 | The model is supposed to predict which of these two choices is more likely to have a contingency relation with the event in the question, and then we calculate accuracy based on the answers the model generates. |
---|
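A small sketch of how such two-choice questions can be generated and scored; the function names and the scoring interface are illustrative assumptions, with the causal potential (or any baseline) plugged in as the `score` function.

```python
# Build two-choice questions from held-out adjacent event pairs and score a model.
import random

def make_questions(test_pairs, all_events, seed=0):
    """test_pairs: (question_event, next_event) pairs observed in the held-out set."""
    rng = random.Random(seed)
    questions = []
    for q, correct in test_pairs:
        wrong = rng.choice([e for e in all_events if e != correct])
        questions.append((q, correct, wrong))
    return questions

def accuracy(questions, score):
    """score(e1, e2): higher means e2 is more likely contingent on e1."""
    right = sum(1 for q, correct, wrong in questions
                if score(q, correct) > score(q, wrong))
    return right / len(questions)
```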
0:13:39 | The previous work we compare to directly is the work by Balasubramanian et al. (2013). They generate what they call Rel-gram tuples, basically pairs of relational tuples of events: pairs of events that tend to occur together. They use news articles. |
---|
0:14:06 | Their method uses co-occurrence statistics based on symmetric conditional probability, shown as "SCP" here, which basically combines the bigram model in two directions. |
---|
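For reference, a common formulation of symmetric conditional probability over raw counts looks like this; it is a sketch of that general measure, not a reproduction of the Rel-grams implementation.

```python
def scp(n_ab, n_a, n_b):
    """Symmetric conditional probability from raw counts:
    SCP(a, b) = P(a|b) * P(b|a) = n_ab**2 / (n_a * n_b)."""
    return (n_ab ** 2) / (n_a * n_b) if n_a and n_b else 0.0
```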
0:14:17 | The Rel-grams they have learned are publicly available; you can access them through an online search interface. They showed in their work that they outperform the previous work on this task of learning narrative events. |
---|
0:14:34 | We designed two experiments to compare against this previous work. We compare the content of what we have learned, to show that what we learn does not exist in the previous collections, and we also apply their model to our dataset, to show that a model that works on more structured data like news articles cannot get good results on our data. |
---|
0:14:56 | As baselines we use the unigram model, which is basically the prior probability distribution of the events; the bigram model, which is the bigram probability of the event pair, again using the skip-2 bigram model; and the event SCP, the symmetric conditional probability from the Rel-grams work. Our main method here is the causal potential. |
---|
0:15:16 | We have two datasets. In the general-domain stories dataset, the stories are randomly selected from the corpus and don't have a specific theme or topic; we have four thousand stories in the training set and two hundred stories in the held-out test set. |
---|
0:15:38 | We also have the topic-specific datasets; here I will be presenting the results on two topics, the camping stories and the stories about witnessing a storm. |
---|
0:15:48 | Here is the split of the dataset for each topic. We split the hand-labeled set into test and training, so we have a hand-labeled test set and a hand-labeled training set, and then for each topic we create a larger training set that has the hand-labeled training data plus the bootstrapped data, to see whether the bootstrapping is helpful at all. |
---|
0:16:14 | Here are the results; this is the accuracy on all the two-choice questions for each topic. |
---|
0:16:22 | I am reporting the results of the baselines on the larger training set, the one that has the hand-labeled data plus the bootstrapped data, because the hand-labeled-only results are just a little worse, so I am only reporting the best results for the baselines. |
---|
0:16:39 | Then for causal potential I have the results for both the hand-labeled training set, which is small, about one to two hundred stories, and the larger training set of about a thousand stories, which is the hand-labeled data plus the bootstrapped data. |
---|
0:16:56 | Here you can see that the causal potential results are significantly stronger than all the other baselines, and also the results on the topic-specific datasets are significantly stronger than the results on the general domain. |
---|
0:17:10 | Even for causal potential, the accuracy on the general domain is about 0.51, but on the topic-specific data, with an even smaller dataset, we get 68 percent accuracy for one topic and about 88 percent accuracy for the other. |
---|
0:17:32 | Also, if you compare the results on the smaller hand-labeled training set to the training set with the bootstrapped data, which is larger, you can see that more training data collected by bootstrapping improves the results, so the bootstrapping was actually effective. |
---|
0:17:52 | Note that the event SCP and the bigram models that were used in previous work for generating these event collections did not work very well on our dataset. |
---|
0:18:09 | The next thing is that we want to compare the content of what we have learned and see whether it actually exists in the previous collections or not. Here I want to show the results of comparing the event pairs we extracted from the camping trip stories against the Rel-gram tuples. |
---|
0:18:27 | The Rel-grams are not sorted by topic, so to get the ones related to camping we used our top ten event patterns generated in the bootstrapping process, and we used them to search the interface. |
---|
0:18:47 | For example, "go camping" is one of the event patterns that we have; we search for it in the interface, and then we get all the pairs in which at least one of the events is "go camping". |
---|
0:19:09 | Then we apply the filtering and ranking used in the same paper: they filter based on frequency and rank based on their symmetric conditional probability metric. We then evaluate the top hundred pairs on our next evaluation task, which I will describe next. |
---|
0:19:28 | Here are some examples of the pairs extracted for camping from the Rel-grams. If you look at the second events in these pairs, which mention working with or visiting organizations, it seems that this is not about camping trips like going on a camping trip; it is mostly about aid groups or refugee camps. |
---|
0:19:54 | So we propose a new evaluation method on Amazon Mechanical Turk for evaluating the topic-specific contingent event pairs. We evaluate the pairs based on their topic relevance and their contingency relation. |
---|
0:20:10 | We ask the annotators to rate the pairs on a scale of zero to three: zero means the events are not contingent; one means the events are contingent but not relevant to the topic; two means contingent and somewhat relevant to the topic; and three, the strongest, means the events are contingent and strongly about the topic. |
---|
0:20:28 | To make the event representation more readable for the annotators, we map it to subject, verb, particle, and direct object, so an event like (person, put, up, tent) is rendered as "person put up tent", which is more readable for the users. |
---|
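A tiny illustrative helper for this rendering step might look like the following; it is my own sketch, not the exact mapping used for the annotation tasks.

```python
def readable(event):
    """(subject, verb, particle, direct_object) -> 'person put up tent'."""
    subject, verb, particle, obj = event
    return " ".join(w for w in (subject, verb, particle, obj) if w)

print(readable(("person", "put", "up", "tent")))  # person put up tent
```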
0:20:49 | Here are the results of the Rel-grams evaluation. Only seven percent are judged to be contingent and topic-relevant, and we think this is because the camping trip topic simply does not exist in that collection. Overall, only forty-two percent are judged to be contingent. |
---|
0:21:11 | We evaluate our topic-specific contingent event pairs in the same way: selecting the pairs that occur more than five times, filtering by the same event patterns, ranking by the causal potential model with the same filtering and ranking method, and evaluating the top hundred for each topic. |
---|
0:21:30 | Here are the results, showing that for the camping topic forty-four percent, and for the storm topic thirty-three percent, of the pairs are contingent and topic-relevant. |
---|
0:21:43 | Overall, about eighty percent of all the pairs that we learned are contingent. The average inter-annotator reliability on these Mechanical Turk tasks was 0.73, which shows substantial agreement. |
---|
0:22:02 | Finally, I want to show some examples of the event pairs. We showed that the results on the topic-specific data are stronger, and even by looking at the examples you can see that the knowledge we learn there is more interesting, like climbing and finding a rock, a transformer blowing and the power going out, or a tree falling and crushing something. |
---|
0:22:27 | The ones from the general-domain dataset are more general, like "person walk down trail" or "person sleep". |
---|
0:22:38 | In conclusion: we learn a new type of knowledge, contingency knowledge about everyday events, that is not available in the previous work on the newswire genre. |
---|
0:22:51 | We have a data collection step that uses an unsupervised bootstrapping model to generate topic-specific data, and this is the first work that directly compares the results on topic-specific versus general-domain stories. |
---|
0:23:04 | We have two new evaluation methods, one of them completely new on Mechanical Turk and the other one inspired by the COPA task, and the results, which I have already talked about. Thank you. |
---|
0:23:59 | I think that's true: if you have a dataset that is specific to a topic, it's easier to learn, and the methods will be more effective. |
---|
0:24:20 | That is definitely an interesting idea. I have tried word2vec models on the corpus, but the results didn't look good: the events that are considered similar, when I look at them, are actually not similar for our task. |
---|
0:25:13 | The labeling is only for the stories, not for the event types. The event patterns are generated automatically by AutoSlog. |
---|
0:25:36 | You just need to come up with some topics, the kinds of things you think people write about on their blogs, for instance a storm, and then you go and look at the corpus and try to find a small set of stories that are on that topic. |
---|
0:25:59 | What I did initially was run topic modeling. The topics it generates are not coherent, but they give you some idea of what topics exist in the corpus, so you know that you can go look for stories about, for example, going on a camping trip. |
---|
0:26:21 | But once you come up with the topics, I think you can expand this, and with more and more rounds of bootstrapping you can collect more data. |
---|