0:00:15 | this is the work of my phd student elahe rahimtoroghi who |
---|
0:00:20 | is from iran |
---|
0:00:21 | and cannot currently leave the united states |
---|
0:00:24 | so i am |
---|
0:00:26 | presenting our work for her and she's finishing her phd not a very good situation |
---|
0:00:33 | alright |
---|
0:00:34 | so the |
---|
0:00:35 | motivation for this work is that |
---|
0:00:38 | narrative structures occur all over different kinds of natural language genres you see them |
---|
0:00:43 | in restaurant reviews you see it in |
---|
0:00:46 | newspapers |
---|
0:00:47 | and this seems to be because the way |
---|
0:00:50 | that humans organize |
---|
0:00:52 | the world is in terms of narrative structure so this kind of fits in very |
---|
0:00:56 | well with the earlier talk |
---|
0:00:57 | this morning that people are always trying to look for coherence a lot |
---|
0:01:01 | of that coherence can kind of be framed as narrative structure |
---|
0:01:06 | and people widely agree |
---|
0:01:07 | that narrative understanding requires modeling the goals of the protagonist and tracking the outcomes of |
---|
0:01:13 | these goals whether the goals are being fulfilled |
---|
0:01:16 | or thwarted |
---|
0:01:17 | and first person |
---|
0:01:20 | social media stories are actually full of these expressions of desires and outcome |
---|
0:01:25 | descriptions for example here's something from a blog site like livejournal which is very |
---|
0:01:30 | similar to where most of our data comes from |
---|
0:01:34 | so somebody's telling a story about something that happened at a concert |
---|
0:01:38 | i dropped something it was dark i had only the cellphone light to look for it we spoke a |
---|
0:01:42 | little bit it was loud so we couldn't really talk |
---|
0:01:45 | i had hoped to |
---|
0:01:47 | ask him to join me for a drink or something [inaudible] to do such a |
---|
0:01:51 | thing |
---|
0:01:52 | but he left before the end and i didn't see him after that maybe i'll write a missed connections |
---|
0:01:56 | so this |
---|
0:01:58 | sentence here i had hoped to ask him to join me for a drink or something |
---|
0:02:01 | shows an expression |
---|
0:02:03 | of a first person desire and one of the reasons that we're interested in first |
---|
0:02:07 | person stories is because |
---|
0:02:10 | we don't have to deal with coreference it's quite easy to track who the |
---|
0:02:14 | protagonist is in a first person |
---|
0:02:17 | narrative so we can kind of track |
---|
0:02:20 | the protagonist's goals in this case |
---|
0:02:24 | which makes the problem a little bit more tractable |
---|
0:02:28 | so what we do is we identify goal and desire expressions in |
---|
0:02:32 | first person narratives like for example that had hoped to in the previous example we have |
---|
0:02:36 | a bunch more and i'll |
---|
0:02:37 | tell you more about how we get them |
---|
0:02:39 | and then what we aim to do is to infer from the surrounding text |
---|
0:02:43 | whether or not the desire is fulfilled or not fulfilled so we want to actually |
---|
0:02:48 | read the narrative and be able to predict whether the desire |
---|
0:02:53 | is fulfilled or not |
---|
0:02:56 | so in this particular case the one i showed you |
---|
0:02:59 | we have this phrase but he left before the end and i didn't see him after |
---|
0:03:02 | that which clearly indicates that the desire was unfulfilled |
---|
0:03:08 | and |
---|
0:03:09 | as i said in this kind of corpus that we have so we have a corpus |
---|
0:03:12 | of about nine hundred thousand |
---|
0:03:15 | first person |
---|
0:03:17 | stories from the blogs domain |
---|
0:03:21 | excuse the formatting of this slide |
---|
0:03:27 | there was a problem with it that i didn't have time to repair |
---|
0:03:32 | but these first person narratives are just rife |
---|
0:03:36 | with these desire expressions you can get as many as you could |
---|
0:03:40 | possibly want out of this kind of |
---|
0:03:43 | data and they have lots and lots of different forms like i wanted to i |
---|
0:03:47 | wished to i decided to i couldn't wait to |
---|
0:03:50 | i aimed to i arranged to and i needed to |
---|
0:03:53 | and in this paper it's true that states can also express desires like |
---|
0:03:59 | if you say something like i'm hungry |
---|
0:04:02 | that implies that you have a desire to get something to eat |
---|
0:04:05 | so initially we had a goal that we might be able to do |
---|
0:04:08 | something with states but we decided in this paper to restrict ourselves |
---|
0:04:12 | to particular verbs |
---|
0:04:15 | and past tense |
---|
0:04:16 | expressions |
---|
0:04:18 | so |
---|
0:04:20 | okay |
---|
0:04:21 | so for the related work the first previous paper was |
---|
0:04:24 | around twenty ten the first paper on this by ellen riloff her student amit goyal |
---|
0:04:28 | and hal daume who were trying to implement a computational model of wendy |
---|
0:04:33 | lehnert's |
---|
0:04:35 | plot units for story understanding |
---|
0:04:37 | and one of the main things that you do in that model of plot units is |
---|
0:04:42 | that you try to |
---|
0:04:43 | track and identify the |
---|
0:04:45 | affect states of the characters |
---|
0:04:47 | the dataset they used was aesop's fables |
---|
0:04:50 | and they manually annotated the aesop's fables themselves to examine the different types of affect |
---|
0:04:55 | expressions in narratives and one of the things that they claim it's a |
---|
0:04:59 | really interesting paper you should read it one of the things that they claim is that |
---|
0:05:03 | affect states are not expressed |
---|
0:05:07 | explicitly like i was sad |
---|
0:05:08 | the character was sad the character was happy but that they're implicated or you derive them |
---|
0:05:14 | by inference |
---|
0:05:15 | by tracking the characters' goals and they claimed in this |
---|
0:05:19 | seminal paper that even though it's been a longtime ai idea |
---|
0:05:24 | that what you want to do is extract people's intentions and whether they're being realised or not |
---|
0:05:29 | in natural language processing we need to do much more work |
---|
0:05:33 | on tracking goals |
---|
0:05:35 | and their outcomes |
---|
0:05:38 | there is also a recent paper by snigdha chaturvedi et al |
---|
0:05:42 | where she kind of picks up on this idea of tracking expressions of desire and their |
---|
0:05:47 | outcomes |
---|
0:05:48 | and they did this in two very different corpora from ours they took data |
---|
0:05:53 | from mctest which is the corpus from microsoft |
---|
0:05:56 | of crowdsourced stories that are suitable |
---|
0:05:59 | for a machine reading task and the stories are supposed to be understandable by seven |
---|
0:06:04 | year olds so you get desire expressions like |
---|
0:06:07 | johnny wanted to be on the baseball team |
---|
0:06:10 | he went to the park and practised every day you know so |
---|
0:06:14 | that kind of story and then they also took passages from wikipedia |
---|
0:06:20 | and tracked desires there and then you get stuff like lenin wanted to |
---|
0:06:24 | be buried in moscow |
---|
0:06:26 | so their data are very different from ours and when i first heard a |
---|
0:06:32 | presentation of this paper |
---|
0:06:34 | i thought that the data we'd |
---|
0:06:39 | already been working on for several years |
---|
0:06:41 | for narrative understanding was so much more suitable and so |
---|
0:06:45 | primed for this particular task that we had to try it on our |
---|
0:06:50 | datasets |
---|
0:06:53 | so |
---|
0:06:54 | so we made a new corpus which is publicly available you can download it from |
---|
0:06:58 | our corpus page |
---|
0:07:02 | it's a really high quality corpus and i'm super excited |
---|
0:07:06 | about being able to do more stuff with it |
---|
0:07:09 | it has three thousand five hundred first person informal narratives with the annotations you can download |
---|
0:07:14 | it |
---|
0:07:15 | and |
---|
0:07:16 | in this paper i'll talk about how we model the goals and desires and |
---|
0:07:20 | i'm gonna talk about some classification models that we've built |
---|
0:07:30 | and we do a feature analysis of what features are actually good for predicting |
---|
0:07:34 | the fulfilment outcome |
---|
0:07:35 | and we look at the effect of both the prior context and the post context |
---|
0:07:46 | [technical difficulties with the slide display] |
---|
0:08:05 | ok so we're using a subset of the spinn3r corpus which is a publicly available corpus of social |
---|
0:08:11 | media |
---|
0:08:13 | collected for one of the icwsm tasks |
---|
0:08:16 | and we restrict ourselves to a subset of the spinn3r corpus that comes from these traditional |
---|
0:08:22 | kind of journalling sites like livejournal |
---|
0:08:25 | so you can get quite clean data by restricting yourself |
---|
0:08:30 | to |
---|
0:08:31 | things from the spinn3r corpus that just come from particular blog websites |
---|
0:08:38 | [discussion with the session chair about the slide display and switching to the pdf] |
---|
0:09:22 | so we have a subset of the spinn3r corpus |
---|
0:09:26 | and we have what we claim is a very systematic linguistically motivated method |
---|
0:09:31 | to identify these goal statements we collect the context before the goal statements and |
---|
0:09:36 | the context after up to five utterances before and five utterances after |
---|
0:09:42 | and then we have a mechanical turk task where we put it out on |
---|
0:09:46 | mechanical turk and we collected gold standard labels for the fulfilment |
---|
0:09:51 | status and i actually also asked the turkers |
---|
0:09:54 | to mark what the spans of text |
---|
0:09:56 | were for evidence for fulfilment or not fulfilment |
---|
0:09:59 | but in this |
---|
0:09:59 | paper we don't do anything with the evidence |
---|
0:10:04 | okay so i kind of referred to this before there are many different linguistic ways to express |
---|
0:10:09 | desires and so one of the things that my colleague pranav anand was struck by in the |
---|
0:10:13 | prior work was that it was limited in terms of the desire expressions that |
---|
0:10:17 | they looked at they just looked at hoped to wished to and wanted to |
---|
0:10:21 | i think that's motivated probably by the fact that the mctest corpus is |
---|
0:10:25 | very simple and written for |
---|
0:10:27 | seven year olds and it was crowdsourced so maybe they didn't have very many |
---|
0:10:31 | expressions of desire in there |
---|
0:10:33 | but our data is open-domain it's very rich we have complex sentences complex |
---|
0:10:39 | temporal expressions we have all kinds of really great stuff in there and i |
---|
0:10:43 | really encourage you to have a look at the data |
---|
0:10:47 | so what pranav did was he went through framenet and picked every kind of |
---|
0:10:52 | frame he really thought could possibly have a verb in it that |
---|
0:10:56 | would express a desire expression |
---|
0:10:58 | we went through we made a big list of all those then we looked at |
---|
0:11:01 | their frequency in the gigaword corpus to see which things are kind of most frequent |
---|
0:11:05 | in the english language not just in ours |
---|
0:11:08 | we picked thirty seven verbs we constructed their past tense |
---|
0:11:12 | forms as patterns with regular expressions |
---|
0:11:15 | and then we |
---|
0:11:17 | and then we ran those against our database of nine hundred thousand first person |
---|
0:11:21 | stories |
---|
0:11:22 | and we found six hundred thousand stories that contained verbal patterns of desire |
---|
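the past-tense verbal pattern matching described here can be sketched roughly as follows; the short verb list and the single regular expression are illustrative stand-ins, not the paper's actual set of thirty seven verbs or its exact patterns:

```python
import re

# Hypothetical sketch of past-tense, first-person desire-verb patterns
# followed by an infinitive. The verb list is illustrative only.
DESIRE_VERBS = ["wanted", "hoped", "wished", "decided", "aimed",
                "arranged", "needed", "planned"]

DESIRE_PATTERN = re.compile(
    r"\bI\s+(?:had\s+)?(" + "|".join(DESIRE_VERBS) + r")\s+to\s+(\w+)",
    re.IGNORECASE,
)

def find_desire_expressions(text):
    """Return (desire verb, embedded verb) pairs for first-person matches."""
    return [(m.group(1).lower(), m.group(2).lower())
            for m in DESIRE_PATTERN.finditer(text)]

print(find_desire_expressions("I had hoped to ask him to join me for a drink."))
# [('hoped', 'ask')]
```

running such patterns over a large story collection is cheap, which is presumably why a regex-based harvest scales to hundreds of thousands of stories.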
0:11:28 | so this is kind of what it looks like we go five sentences before and |
---|
0:11:31 | five sentences after the reason that we go five sentences before is that |
---|
0:11:36 | there is a claim from oral narrative about the structure of narrative that you often |
---|
0:11:41 | foreshadow |
---|
0:11:42 | something that's gonna happen so unlike the previous work we took the prior context |
---|
0:11:47 | so we have the prior context the desire expression and |
---|
0:11:51 | the post context and our goal is to use the context around the desire |
---|
0:11:54 | expression to try to predict whether |
---|
0:11:58 | the |
---|
0:11:59 | expressed desire is fulfilled |
---|
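the five-before / five-after window construction can be sketched like this; `build_instance` is a hypothetical helper and assumes the story has already been split into sentences by some segmenter:

```python
# Sketch of the context-window construction described in the talk:
# five sentences of prior context and five of post context around the
# sentence containing the desire expression.
def build_instance(sentences, desire_idx, window=5):
    prior = sentences[max(0, desire_idx - window):desire_idx]
    post = sentences[desire_idx + 1:desire_idx + 1 + window]
    return {"prior_context": prior,
            "desire_expression": sentences[desire_idx],
            "post_context": post}

story = [f"sentence {i}" for i in range(12)]
inst = build_instance(story, desire_idx=6)
print(len(inst["prior_context"]), len(inst["post_context"]))
# 5 5
```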
0:12:02 | so we sampled from the corpus according to a skewed distribution that matched the whole |
---|
0:12:08 | original corpus we put three thousand six hundred eighty samples out for annotation |
---|
0:12:12 | exhibiting sixteen verbal patterns |
---|
0:12:15 | and we showed the mechanical turkers |
---|
0:12:19 | which desire expression they were supposed to mark because sometimes a story |
---|
0:12:23 | might have more than one |
---|
0:12:25 | in it |
---|
0:12:26 | so we showed them the one that they were supposed to predict the fulfilment status |
---|
0:12:29 | for |
---|
0:12:31 | we had three qualified workers per utterance |
---|
0:12:40 | and we asked them to label whether the desire expression was fulfilled and to mark the |
---|
0:12:44 | textual evidence |
---|
0:12:48 | so |
---|
0:12:50 | we got good agreement on mechanical turk we had three qualified workers we |
---|
0:12:55 | kind of make sure they can read english and that they're paying |
---|
0:12:59 | attention to the task that's typically what we do when we have a task like |
---|
0:13:02 | this we |
---|
0:13:03 | put it out to a lot of people we see who does a good job |
---|
0:13:06 | and then we |
---|
0:13:07 | go back to the people that have done a good job and we say we'll |
---|
0:13:10 | give you this task exclusively and we pay them |
---|
0:13:13 | well |
---|
0:13:14 | and then they'll go off and do it for however long it takes like |
---|
0:13:18 | a week or two |
---|
0:13:20 | and |
---|
0:13:22 | on the stuff that we put out there we got that |
---|
0:13:25 | seventy five percent of the |
---|
0:13:28 | desires were fulfilled sixty seven percent [unclear] and forty one percent |
---|
0:13:32 | of them |
---|
0:13:33 | were unknown from the context |
---|
0:13:39 | okay so |
---|
0:13:42 | the |
---|
0:13:43 | one thing to notice is that the verbal pattern itself heralds |
---|
0:13:47 | the outcome so how you express the desire is often |
---|
0:13:52 | kind of conditioned on whether the desire is actually fulfilled so if you look at |
---|
0:13:57 | decided to |
---|
0:13:59 | decided to kind of |
---|
0:14:00 | implicates that the desire is gonna be fulfilled |
---|
0:14:03 | if you use a word like hoped to it implicates that the desire is |
---|
0:14:07 | not gonna be fulfilled |
---|
0:14:08 | but something like wanted to |
---|
0:14:11 | is more around fifty fifty or needed to so there's a prior distribution that |
---|
0:14:16 | is associated with the selection of the |
---|
0:14:18 | verb form |
---|
0:14:21 | and so we have this database you know like i said you can |
---|
0:14:24 | download it |
---|
0:14:25 | we think it's a really lovely |
---|
0:14:27 | testbed for modeling desires in personal narrative and their fulfilment |
---|
0:14:32 | it's very open domain we have the prior and the post context we have pretty |
---|
0:14:36 | reliable |
---|
0:14:37 | annotation so one of our contributions is just |
---|
0:14:40 | the corpus |
---|
0:14:42 | so next i'll talk about the experiments we did so we defined feature sets motivated by |
---|
0:14:47 | narrative structure some of these features were motivated by the previous work by |
---|
0:14:52 | goyal and riloff and by |
---|
0:14:55 | chaturvedi's experiments |
---|
0:14:57 | and then we ran different kinds of classification experiments |
---|
0:15:02 | to test whether we can actually predict desire |
---|
0:15:06 | fulfilment and we also applied our models to chaturvedi's data which is also |
---|
0:15:11 | publicly available and so we compare directly |
---|
0:15:15 | how our models work on their data and on our data and all those datasets |
---|
0:15:20 | are publicly available |
---|
0:15:23 | so some of our features come directly from the desire expressions so in this example |
---|
0:15:28 | eventually i just decided to speak i can't even remember what i said people were |
---|
0:15:32 | very happy |
---|
0:15:33 | and proud of me for saying what i wanted to say |
---|
0:15:35 | so the first |
---|
0:15:37 | feature that's important is the desire verb |
---|
0:15:41 | obviously like whether it's decided to or hoped to or wanted to |
---|
0:15:45 | and then what we call the focal word which is the embedded verb underneath the |
---|
0:15:50 | desire expression so we pick the verb and stem it |
---|
0:15:53 | so in this case it's speak |
---|
0:15:55 | we then look for other words that are related to the focal word in the |
---|
0:15:59 | context so we look for synonyms and antonyms of the focal word and we count |
---|
0:16:03 | whether those things occur |
---|
0:16:05 | we look for the desire subject and its mentions all the different places where the |
---|
0:16:09 | desire subject which in our case is always first person |
---|
0:16:13 | gets mentioned |
---|
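the focal-word counting can be sketched as below; the tiny `RELATED` lexicon is purely illustrative and stands in for whatever lexical resource (e.g. a WordNet-style thesaurus) the actual system used, which the talk does not specify:

```python
# Sketch of the focal-word features: count occurrences of synonyms and
# antonyms of the stemmed focal verb in the surrounding context.
# The toy lexicon below is a hypothetical stand-in.
RELATED = {
    "speak": {"synonyms": {"talk", "say"}, "antonyms": {"listen"}},
    "ask": {"synonyms": {"invite", "request"}, "antonyms": set()},
}

def focal_word_features(focal, context_tokens):
    entry = RELATED.get(focal, {"synonyms": set(), "antonyms": set()})
    tokens = [t.lower() for t in context_tokens]
    return {"synonym_count": sum(t in entry["synonyms"] for t in tokens),
            "antonym_count": sum(t in entry["antonyms"] for t in tokens)}

print(focal_word_features("speak",
                          "they would not listen when i tried to talk".split()))
# {'synonym_count': 1, 'antonym_count': 1}
```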
0:16:14 | and |
---|
0:16:16 | we have those features and we have discourse features having to do with whether there |
---|
0:16:21 | are discourse relations explicitly stated |
---|
0:16:24 | and we classify these according to their occurrence in the penn discourse treebank |
---|
0:16:30 | there is an inverse index in the penn discourse treebank annotation manual that |
---|
0:16:35 | gives you all the |
---|
0:16:37 | surface forms that were classified as a particular class of discourse relation so |
---|
0:16:42 | we just take those from there so we have two classes violated expectation or |
---|
0:16:47 | meet expectation |
---|
0:16:49 | and we keep track of those |
---|
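the discourse-relation counting might look like the sketch below; the two marker lists are small illustrative samples, not the full connective sets taken from the penn discourse treebank annotation manual:

```python
# Sketch of the discourse features: count explicit markers of
# "violated expectation" vs. "meet expectation" in the context.
# Marker sets here are illustrative samples only.
VIOLATED = {"but", "however", "although", "nevertheless", "unfortunately"}
MEET = {"so", "finally", "eventually", "indeed", "therefore"}

def discourse_features(tokens):
    toks = [t.lower() for t in tokens]
    return {"violated_expectation": sum(t in VIOLATED for t in toks),
            "meet_expectation": sum(t in MEET for t in toks)}

print(discourse_features("but he left before the end so i went home".split()))
# {'violated_expectation': 1, 'meet_expectation': 1}
```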
0:16:52 | we have sentiment flow features |
---|
0:16:54 | did i miss anything |
---|
0:16:55 | we have connotation lexicon |
---|
0:16:58 | sentiment flow features |
---|
0:17:00 | so we look and see over the passages that are in the story |
---|
0:17:05 | whether the sentiment changes whether it starts positive and goes to negative or starts negative |
---|
0:17:10 | and goes to positive that's another feature that we keep track of |
---|
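the sentiment-flow feature can be sketched as follows; the word lists are toy stand-ins for the connotation lexicon mentioned in the talk, and the aggregation into a single before/after polarity is one plausible simplification:

```python
# Sketch of the sentiment-flow feature: score prior and post context with a
# tiny word-list scorer and record the direction of the change.
POSITIVE = {"happy", "proud", "great", "glad"}
NEGATIVE = {"sad", "missed", "dark", "loud"}

def sentence_polarity(sentence):
    toks = sentence.lower().split()
    return (sum(t in POSITIVE for t in toks)
            - sum(t in NEGATIVE for t in toks))

def sentiment_flow(prior_sentences, post_sentences):
    before = sum(sentence_polarity(s) for s in prior_sentences)
    after = sum(sentence_polarity(s) for s in post_sentences)
    if before >= 0 > after:
        return "positive_to_negative"
    if before < 0 <= after:
        return "negative_to_positive"
    return "no_change"

print(sentiment_flow(["it was dark and loud"], ["people were very happy"]))
# negative_to_positive
```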
0:17:15 | and we have these four types of features that are motivated by narrative |
---|
0:17:20 | characteristics the paper goes into detail about the ablation experiments that we do |
---|
0:17:25 | to test which kinds of features |
---|
0:17:28 | matter we use a |
---|
0:17:30 | neural |
---|
0:17:31 | network architecture a sequential architecture to do this i'm running out of time |
---|
0:17:36 | and |
---|
0:17:38 | we also compared to just plain logistic regression |
---|
0:17:43 | on the data so we have two different approaches for generating the sentence embeddings |
---|
0:17:48 | we use |
---|
0:17:50 | the pre-trained skip-thought models where we concatenate the features |
---|
0:17:55 | with the sentence embeddings and we can use that as the input representation |
---|
0:18:00 | and we also have a convolutional neural net |
---|
0:18:03 | a recurrent neural net |
---|
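the concatenated input representation fed to the plain logistic-regression baseline can be sketched like this; the random vectors are placeholders standing in for real pre-trained skip-thought embeddings and real feature values:

```python
# Sketch of the input representation: a sentence embedding concatenated
# with the hand-crafted feature vector, fed to logistic regression.
# Embeddings and labels here are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, embed_dim, feat_dim = 40, 16, 5

embeddings = rng.normal(size=(n, embed_dim))   # stand-in sentence embeddings
features = rng.integers(0, 3, size=(n, feat_dim)).astype(float)
X = np.concatenate([embeddings, features], axis=1)
y = rng.integers(0, 2, size=n)                 # fulfilled vs. unfulfilled

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(X.shape, clf.predict(X[:3]).shape)
# (40, 21) (3,)
```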
0:18:05 | so we have this three-layer architecture and |
---|
0:18:08 | what we do is we sequentially go through the prior context and the post context |
---|
0:18:12 | we also did experiments where we |
---|
0:18:15 | distinguish them we tell the learner whether it's the prior context or the post context surprisingly to |
---|
0:18:20 | me |
---|
0:18:21 | it doesn't matter |
---|
0:18:23 | if you distinguish them |
---|
0:18:25 | so what we do just to remember we have eleven |
---|
0:18:28 | sentences we have five before and five after the desire expression |
---|
0:18:32 | at each stage we keep the desire expression |
---|
0:18:36 | in the input so each time we read in a new next |
---|
0:18:40 | sentence of context |
---|
0:18:42 | we keep the desire expression and then we kind of recursively call it to read |
---|
0:18:46 | the next one |
---|
0:18:47 | so that's how we're keeping track of the context |
---|
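the sequential reading scheme described here can be sketched as below; `encode_step` is a hypothetical placeholder for the real recurrent encoder, and the point of the sketch is only that the desire expression is re-appended to the input at every step:

```python
# Sketch of the sequential reading: consume the context one sentence at a
# time, keeping the desire expression in the input at each step.
def encode_step(state, sentence, desire_expression):
    # placeholder update; a real model would run an RNN/LSTM cell here
    return state + [(sentence, desire_expression)]

def read_context(prior, desire_expression, post):
    state = []
    for sentence in prior + post:           # five before, five after
        state = encode_step(state, sentence, desire_expression)
    return state

state = read_context([f"p{i}" for i in range(5)],
                     "i had hoped to ask him",
                     [f"q{i}" for i in range(5)])
print(len(state))
# 10
```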
0:18:52 | and we did some experiments on a subset of |
---|
0:18:55 | desiredb which is meant to match more closely the data that |
---|
0:19:01 | chaturvedi worked on |
---|
0:19:03 | that only has the expressions that she looked at i only have two minutes |
---|
0:19:07 | okay so |
---|
0:19:11 | we wanted to do these ablation experiments with these different architectures |
---|
0:19:17 | and so |
---|
0:19:19 | the first thing comparing to bag of words with skip-thought shows that |
---|
0:19:23 | having these linguistic features actually matters |
---|
0:19:26 | for the performance on this task not just the embedding |
---|
0:19:30 | so we get an overall f one of point seven for predicting |
---|
0:19:36 | fulfilment versus not fulfilment |
---|
0:19:38 | we also have results that support |
---|
0:19:41 | our |
---|
0:19:42 | theoretically motivated claim that the prior context should matter not just the subsequent |
---|
0:19:48 | context |
---|
0:19:49 | this slide shows that indeed |
---|
0:19:51 | it does improve over just having the desire expression alone to have the prior context |
---|
0:19:56 | of course if you have the whole context |
---|
0:19:58 | you can do even better |
---|
0:20:01 | and then we compare |
---|
0:20:05 | certain individual features |
---|
0:20:07 | bag of words versus all features versus just the discourse features and our best result |
---|
0:20:12 | is with just the discourse features this is actually in my view kind |
---|
0:20:16 | of disappointing that just the discourse features by themselves |
---|
0:20:20 | do better than all the features |
---|
0:20:23 | so you can kind of tell it to just pay attention to these discourse features |
---|
0:20:27 | and i would consider that an |
---|
0:20:30 | interesting next thing to |
---|
0:20:31 | do would be to explicitly sample the corpus |
---|
0:20:36 | so that you |
---|
0:20:37 | selected stuff that didn't have the discourse features so that you could see |
---|
0:20:41 | what other features come into play when you don't have explicit discourse features there |
---|
0:20:49 | so interestingly similar features and methods achieve better results on the fulfilled class |
---|
0:20:54 | compared to the unfulfilled class |
---|
0:20:56 | and we think that's because it's just harder to predict the |
---|
0:21:01 | unfulfilled case it's more ambiguous and the human annotators also found it a problem |
---|
0:21:06 | and i'm supposed to stop now |
---|
0:21:08 | also |
---|
0:21:10 | what was actually kind of really surprising to us was we got much better results on chaturvedi |
---|
0:21:15 | et al's dataset than |
---|
0:21:17 | they did |
---|
0:21:18 | themselves |
---|
0:21:19 | okay so i'll stop and i will take questions |
---|
0:21:24 | i'll leave it on that slide |
---|
0:21:33 | questions please |
---|
0:21:37 | nobody |
---|
0:21:48 | right so you said you're looking at just verbal patterns for these |
---|
0:21:53 | desire |
---|
0:21:55 | expressions did you come across |
---|
0:22:00 | nonverbal patterns that might express desire or |
---|
0:22:05 | there are there are nonverbal patterns like you can easily see like if somebody |
---|
0:22:10 | says i |
---|
0:22:10 | expected to then you could have my expectation was that |
---|
0:22:14 | or you could have |
---|
0:22:16 | but what we |
---|
0:22:17 | we so we didn't |
---|
0:22:19 | we did a search against the corpus where we pulled a bunch of those things |
---|
0:22:23 | out my expectation my goal my plan |
---|
0:22:26 | and also some of these states like |
---|
0:22:28 | hungry thirsty tired whatever that might indicate some kind of goal |
---|
0:22:34 | and we just decided to leave those things aside |
---|
0:22:38 | for the present but they're definitely in there |
---|
0:22:40 | so if you're actually interested in those you could easily probably find those semantic |
---|
0:22:45 | things like you know just purpose clauses a lot of these contexts are like that |
---|
0:22:51 | they don't actually have the words you know you don't say i want to go somewhere |
---|
0:22:55 | you say in order to go somewhere |
---|
0:22:57 | so you don't actually get verbal patterns but those are there |
---|
0:23:03 | i'm just wondering how many other kinds of |
---|
0:23:06 | patterns there might be i just think there's lots of |
---|
0:23:10 | other patterns and it's actually |
---|
0:23:12 | what's really interesting is how frequent want is |
---|
0:23:16 | so in this |
---|
0:23:18 | so for our data |
---|
0:23:20 | wanted is the most common |
---|
0:23:22 | of the verbal patterns wanted is the most common expression |
---|
0:23:26 | and you could do quite a lot if you just looked for wanted to but |
---|
0:23:29 | we have all these different ones and we are also |
---|
0:23:34 | able to show that they have different biases as to whether |
---|
0:23:38 | the goal will be fulfilled or not i have a quick comment at the end because |
---|
0:23:43 | usually when you're talking about non fulfilment that's an indication of |
---|
0:23:49 | expectation |
---|
0:23:52 | and i wouldn't have thought that |
---|
0:23:54 | the word decided |
---|
0:23:56 | generated that expectation that can get cancelled other words [unclear] |
---|
0:24:05 | probably do but not decided |
---|
0:24:08 | so |
---|
0:24:09 | but you |
---|
0:24:12 | said decided |
---|
0:24:14 | was unfulfilled |
---|
0:24:15 | sixty some no decided is fulfilled eighty seven percent of the time |
---|
0:24:20 | that's what that shows right |
---|
0:24:22 | so we had a strong intuition before we put the data out for annotation |
---|
0:24:28 | and decided to is fulfilled eighty seven percent of the time which is what |
---|
0:24:32 | i would expect just looking at it and it's unfulfilled nine percent of the |
---|
0:24:36 | time right |
---|
0:24:37 | okay |
---|
0:25:11 | it's interesting that |
---|
0:25:13 | you could see a difference there i had a very strong intuition that a |
---|
0:25:19 | lot of these would be interesting |
---|
0:25:20 | because the outcome would be implicated so i'm actually quite interested in the fact |
---|
0:25:24 | that around ten percent of the time with decided to it's actually not |
---|
0:25:28 | fulfilled |
---|
0:25:29 | and |
---|
0:25:31 | there's these different cases of non fulfilment which we're looking at in |
---|
0:25:35 | our subsequent work |
---|
0:25:37 | like |
---|
0:25:37 | sometimes the key goal is not fulfilled because something else comes along and you decide actually |
---|
0:25:42 | to do something else |
---|
0:25:43 | so it's not really unfulfilled you just kind of changed your mind like |
---|
0:25:48 | we wanted to buy a |
---|
0:25:50 | playstation |
---|
0:25:51 | and we went to best buy |
---|
0:25:53 | and the |
---|
0:25:55 | wii was on sale so we came home with that |
---|
0:25:58 | so maybe the kind of higher level goal |
---|
0:26:01 | is actually fulfilled they wanted some kind of entertainment system but the expressed desire |
---|
0:26:06 | was in fact not fulfilled |
---|
0:26:08 | those are maybe about i don't know |
---|
0:26:11 | eight percent of the cases of the ten percent that might not be |
---|
0:26:16 | okay since we're running over time we should move on to the next talk |
---|