0:00:15 | and you |
---|
0:00:17 | thanks for coming back for this session |
---|
0:00:22 | this is work by these three students, mostly Sarah, almost entirely; they were all |
---|
0:00:27 | undergraduate students who kind of converged at the same time and were |
---|
0:00:30 | interested in this |
---|
0:00:31 | and now they have all moved on and are doing other things, so I'm here, I'm just |
---|
0:00:35 | the person |
---|
0:00:37 | presenting it; our affiliation is Boise, Idaho |
---|
0:00:42 | in the next couple of minutes some of you will probably look that up; I won't be offended |
---|
0:00:45 | if |
---|
0:00:46 | you look up whether Idaho is a real state |
---|
0:00:48 | in the United States |
---|
0:00:51 | it does exist, and Boise is the capital of that state, if you didn't know that |
---|
0:00:55 | Boise State is a nice university; I've really enjoyed being there |
---|
0:00:58 | and I run the Speech, Language and Interactive Machines group, a fairly young |
---|
0:01:02 | research group there; I've only been there for about two years |
---|
0:01:09 | let me just start |
---|
0:01:11 | I actually wanted to draw attention to this bottom reference here; what we're doing in |
---|
0:01:15 | this paper builds largely on the Novikova et al. paper, and this |
---|
0:01:18 | is out of Oliver Lemon's lab |
---|
0:01:23 | and they did some research on basically social robotics, which is pretty similar |
---|
0:01:27 | to what we're doing, and we follow a lot of the methodology here |
---|
0:01:31 | but what we wanted to look at: we have this little robot and we wanted |
---|
0:01:34 | to do some language grounding studies with it, and then one of my students asked |
---|
0:01:38 | this |
---|
0:01:39 | question |
---|
0:01:40 | that we couldn't let go of; she said, well, are people going to treat this robot |
---|
0:01:44 | the way we want them to treat it |
---|
0:01:46 | like first language acquisition? and I was thinking, well |
---|
0:01:50 | I don't know, maybe we should study this |
---|
0:01:52 | and that's actually what happened with this paper |
---|
0:01:56 | but a lot of the motivation comes from all of the great work in grounded |
---|
0:02:01 | semantics and in symbol grounding |
---|
0:02:05 | there are lots of other people; they're not all mentioned here, but here are |
---|
0:02:09 | a few that we kind of build on and focus on; the point is this: |
---|
0:02:14 | the point is, if you're a person and you're interacting |
---|
0:02:18 | with a child |
---|
0:02:20 | and the child is learning language, the child doesn't know language to the degree that an adult |
---|
0:02:24 | knows language |
---|
0:02:26 | the child sees an object, and the idea is that the child knows that pretty much |
---|
0:02:30 | all objects have a denotation |
---|
0:02:32 | a phrase or a single word or something; and so this child sees this object |
---|
0:02:37 | here, and the child maybe doesn't know the denotation for this object, and so the |
---|
0:02:40 | adult says "that's a ball" |
---|
0:02:44 | and the child remembers this, and it's quite amazing, and this is kind of what |
---|
0:02:49 | grounding is doing; when you do this with a machine like |
---|
0:02:53 | a robot, it has to perceive this object somehow, represent this object somehow; a lot of |
---|
0:02:59 | the work up until now has been done with |
---|
0:03:01 | vision as the main modality for grounding language into |
---|
0:03:07 | some perceptual modality |
---|
0:03:11 | but once you have |
---|
0:03:13 | a robot, once you have an embodied agent |
---|
0:03:17 | people start assigning anthropomorphic characteristics to it based upon how it looks and |
---|
0:03:22 | based upon how it acts |
---|
0:03:24 | as soon as they see a robot they immediately think: is this a man or a woman, |
---|
0:03:28 | is it sympathetic, how can I interact with this thing, what can I expect? |
---|
0:03:32 | and really, as soon as someone says "this is a robot", people think it has |
---|
0:03:36 | adult intelligence, and you don't want that if you have a first language acquisition |
---|
0:03:40 | task that you want the robot to do |
---|
0:03:43 | and that was the question my student asked: if we have this little robot and |
---|
0:03:47 | we want to do a first language acquisition task in a setting that is very similar |
---|
0:03:50 | to the way children acquire their language |
---|
0:03:54 | we cannot assume that the people who interact with the robot |
---|
0:03:59 | are going to treat it like a young child |
---|
0:04:01 | and so that's what we set off to do: we want to actually predict |
---|
0:04:05 | what age people assign to a robot, and also the academic level; that's what we're |
---|
0:04:10 | working on here; so the main research question is this: does the way |
---|
0:04:14 | a robot verbally interacts affect how humans perceive the age of the robot? |
---|
0:04:19 | the short answer is yes, so if anyone wants to go ahead and put your head down |
---|
0:04:23 | and have a little rest if you don't care about the rest, go ahead, but we can |
---|
0:04:26 | sort of tease this apart a little bit and show you what we did |
---|
0:04:31 | we did an experiment |
---|
0:04:34 | we have some robots; one thing we varied is the appearance, how they looked |
---|
0:04:39 | and there are three different ones, which I'll show you in a moment; we also varied the way the |
---|
0:04:43 | robots verbally interacted |
---|
0:04:45 | and we had participants show the robot how to build a simple puzzle; that was kind |
---|
0:04:50 | of the language grounding task, though the grounding wasn't actually happening |
---|
0:04:54 | but they were interacting with the robots in this very simple dialogue setting |
---|
0:04:58 | and we recorded the participants: we had a camera pointed at them as |
---|
0:05:04 | they were interacting with these robots, and we recorded their speech and their face |
---|
0:05:07 | and then after they interacted with each robot they filled out a questionnaire about their |
---|
0:05:12 | perceptions |
---|
0:05:14 | and then after we gathered all this data we analyzed it; we analyzed |
---|
0:05:19 | the recordings for facial emotions, prosody, and linguistic complexity |
---|
0:05:24 | and we found correlations between the data and the perceived age |
---|
0:05:27 | and from that we can predict it |
---|
0:05:31 | so these are the three robots we used |
---|
0:05:34 | because we had them |
---|
0:05:36 | and because we wanted a robot that was kind of anthropomorphic and one that wasn't |
---|
0:05:39 | so here is the non-anthropomorphic robot, the Kobuki; it looks basically like a Roomba |
---|
0:05:43 | with a broken Kinect on it |
---|
0:05:45 | and then this is Anki's Cozmo; I don't know if you can see it, it's a |
---|
0:05:48 | very small robot, it's marketed as a toy, and it has a nice Python SDK |
---|
0:05:51 | and then we just had a non-embodied, non-physical spoken dialogue system, which we |
---|
0:05:58 | affectionately called the "no robot" |
---|
0:06:00 | the no robot, not a robot at all |
---|
0:06:03 | so there are three robots |
---|
0:06:06 | and what we did with the robots is kind of embarrassingly simple, but we had |
---|
0:06:10 | two speech settings that we wanted to test, because we wanted to see how |
---|
0:06:13 | people treated the robot based on how it verbally interacted with them |
---|
0:06:16 | and the only speech we had the robots produce was feedback |
---|
0:06:20 | and there were two settings of this feedback; one was minimal feedback, like "yes", |
---|
0:06:25 | "okay" |
---|
0:06:26 | which was basically marking phonetic receipt; we call this the low setting, like "I |
---|
0:06:30 | heard that, I heard you", but whether or not it understood is left |
---|
0:06:35 | open |
---|
0:06:37 | and then we had another feedback setting which marked semantic understanding, like "sure", "okay", "I see" |
---|
0:06:41 | or it would repeat something back, to show "I understood you correctly"; these are |
---|
0:06:48 | all feedback; it's not really taking the floor, it's not really doing |
---|
0:06:51 | anything, there's not really a lot of dialogue going on here, but there are these two settings, and |
---|
0:06:57 | we found that these settings make quite a difference |
---|
0:07:00 | other than that, the robots didn't move |
---|
0:07:03 | with the Kobuki, the light was on and that was it |
---|
0:07:07 | Cozmo, in its default setting, had these little animated eyes that just |
---|
0:07:10 | kind of look around, but they didn't look at anything and it didn't move, and |
---|
0:07:14 | the participants just did the task; they were just talking |
---|
0:07:18 | so in all we have six settings: three robots times two speech settings |
---|
0:07:23 | so the task was this: we'd set |
---|
0:07:27 | a robot down right here |
---|
0:07:29 | whether the Kobuki or |
---|
0:07:31 | the Cozmo robot, or we just didn't have anything there for the no-robot setting |
---|
0:07:35 | and then we had these cameras here recording the participant |
---|
0:07:38 | and we had this desk, and on it we had these little puzzle |
---|
0:07:42 | pieces; I don't know if you recognize them |
---|
0:07:44 | on this paper there are three different target shapes that they can make with these three |
---|
0:07:48 | pieces, and each of these shapes had a name |
---|
0:07:52 | the only instructions we gave the participants were: |
---|
0:07:54 | show the robot how to build each of these shapes, and make sure |
---|
0:07:58 | at the end you tell the robot what the name is |
---|
0:08:01 | and just do them one after another |
---|
0:08:03 | and what would happen is, as they interacted with the robot, the robot would give |
---|
0:08:07 | some feedback, depending on the setting, as they were talking to it, so it was kind of interacting |
---|
0:08:12 | with them; but of course it was controlled by a wizard |
---|
0:08:15 | so the procedure went like this: we'd randomly |
---|
0:08:18 | put a robot here, they'd interact with it |
---|
0:08:21 | fill out a questionnaire about that interaction, and then we'd give them a new set |
---|
0:08:24 | of |
---|
0:08:25 | puzzle tiles and a new list of |
---|
0:08:27 | target shapes |
---|
0:08:28 | they'd interact with that robot, fill out a questionnaire again for that interaction, and then |
---|
0:08:33 | they'd have the third robot |
---|
0:08:34 | with a new set of puzzle pieces and target shapes, and then fill out |
---|
0:08:39 | a third questionnaire |
---|
0:08:41 | the things we randomly assigned were the robot presentation order and the order of the puzzles |
---|
0:08:44 | we had two different voices for the Kobuki and the spoken |
---|
0:08:48 | dialogue system, from Amazon, a male and a female voice, and that was randomly assigned |
---|
0:08:54 | because Cozmo has its own voice |
---|
0:08:56 | and then we had the language setting |
---|
0:08:59 | the high or low language setting stayed the same for all three interactions |
---|
0:09:02 | we just sort of flipped a coin at the beginning and then they would get that one for |
---|
0:09:06 | all three of them |
---|
0:09:08 | and so we collected data from the camera facing the participants, which was audio |
---|
0:09:12 | and video, and then of course the questionnaires |
---|
0:09:16 | in the end we had twenty-one participants, ten male and eleven female, which is what |
---|
0:09:19 | we could recruit in the time we had |
---|
0:09:21 | and each interacted with all three robots, so in all we collected sixty-three interactions and fifty- |
---|
0:09:25 | eight questionnaires; a few had to be thrown out because they weren't correctly filled |
---|
0:09:29 | out |
---|
0:09:31 | and then we moved on to the data analysis |
---|
0:09:36 | for each interaction |
---|
0:09:38 | with an individual robot, we would take a snapshot every five seconds and average over the |
---|
0:09:42 | emotion distributions from the Microsoft Emotion API; if you're not familiar with this API |
---|
0:09:46 | you can send it an image like this and it will give you |
---|
0:09:49 | scores |
---|
0:09:50 | for eight different emotions |
---|
0:09:52 | so here's an example: here's someone who's mostly neutral |
---|
0:09:56 | with a little bit spread over the other ones; here's someone who's happy, with a little bit on |
---|
0:09:59 | the other ones |
---|
0:10:00 | here's someone who's mostly neutral but there's more contempt; look at |
---|
0:10:03 | that contempt there; and contempt actually came up a little bit in |
---|
0:10:08 | our study; so we collected this data |
---|
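(As a rough illustration of this averaging step, here is a minimal Python sketch. It assumes each five-second snapshot has already been sent to the Emotion API and has come back as a dict of scores over the eight emotion categories; the snapshot values shown below are made up.)

```python
from collections import defaultdict

# The eight emotion categories the Microsoft Emotion API scored per face.
EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "neutral", "sadness", "surprise"]

def average_emotions(frame_scores):
    """Average per-snapshot emotion distributions over one interaction.

    frame_scores: list of dicts, one per five-second snapshot, each mapping
    emotion name -> score in [0, 1] (assumed to be read off the API).
    Returns a dict with the mean score per emotion for the interaction.
    """
    totals = defaultdict(float)
    for scores in frame_scores:
        for emotion in EMOTIONS:
            totals[emotion] += scores.get(emotion, 0.0)
    n = max(len(frame_scores), 1)
    return {emotion: totals[emotion] / n for emotion in EMOTIONS}

# Hypothetical example: two snapshots from one participant-robot interaction.
snapshots = [
    {"neutral": 0.80, "happiness": 0.15, "contempt": 0.05},
    {"neutral": 0.60, "happiness": 0.35, "surprise": 0.05},
]
print(average_emotions(snapshots))
```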
0:10:13 | and just to give you some numbers here about what we found for emotions: most |
---|
0:10:17 | of the time people were just neutral, and then about eleven |
---|
0:10:21 | percent of the time they were happy; sadness, surprise, and contempt were the next most |
---|
0:10:25 | common ones |
---|
0:10:27 | and then the other ones were negligible, less than one percent on average, for |
---|
0:10:30 | all settings, all robots, everything |
---|
0:10:33 | but then we looked at |
---|
0:10:35 | the robots in the different settings individually; so if you marginalize out the robots and |
---|
0:10:39 | just look at the low and high setting, we find that people spent a lot more |
---|
0:10:42 | time being happy with the robots in the low setting than in the high setting |
---|
0:10:46 | and that's just getting phonetic receipt; part of this is that |
---|
0:10:51 | in the high setting it's marking that it semantically understood you, and people got really |
---|
0:10:55 | frustrated because they expected more interaction from the robots, but they weren't doing |
---|
0:10:59 | more than just giving this verbal feedback |
---|
0:11:02 | so they weren't very happy with any robot in the high setting |
---|
0:11:08 | and that's kind of the picture here for the robots themselves: a little more happiness |
---|
0:11:12 | with Cozmo, and they would rather interact with the |
---|
0:11:16 | non-embodied spoken dialogue system than with the Kobuki |
---|
0:11:19 | for whatever reason |
---|
0:11:22 | and you can sort of tease apart the individual settings here |
---|
0:11:26 | I'll refer you to the paper if you want to |
---|
0:11:29 | dig into more detail |
---|
0:11:31 | we looked at prosody very simply: for each interaction we averaged the F0 |
---|
0:11:35 | over the entire interaction, maybe a couple of minutes of |
---|
0:11:39 | speech, and just the participant's speech, not the robot's |
---|
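(The talk does not say which pitch tracker was used; as a minimal sketch of the per-interaction mean F0 step, assuming a mono recording of just the participant and using librosa's pYIN tracker, it might look like this. The file name and the F0 search range are assumptions.)

```python
import numpy as np
import librosa

def mean_f0(wav_path, fmin=65.0, fmax=400.0):
    """Return the mean F0 in Hz over the voiced frames of one interaction.

    Load the participant's audio, run the pYIN pitch tracker, and average
    over the frames where a pitch was actually detected.
    """
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced_flag, voiced_prob = librosa.pyin(y=y, fmin=fmin, fmax=fmax, sr=sr)
    return float(np.nanmean(f0))  # NaNs mark unvoiced frames

# Hypothetical usage: one value per participant-robot interaction.
# print(mean_f0("participant03_cozmo.wav"))
```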
0:11:43 | and here are some results for that |
---|
0:11:47 | if you marginalize out the robots, in the low setting people |
---|
0:11:52 | had a higher pitch |
---|
0:11:54 | than in the high setting, overall |
---|
0:11:56 | this kind of goes with the literature: people who talk to |
---|
0:12:01 | children raise their pitch a little bit, and that's kind of what we want |
---|
0:12:05 | but even this small difference in feedback affected the pitch difference |
---|
0:12:10 | across all the robots, and then |
---|
0:12:13 | if you look at Cozmo in the low and high setting, or rather, marginalize out |
---|
0:12:17 | the low and high setting and just look at the robots, people talked |
---|
0:12:21 | to the Cozmo robot at a much higher pitch than to the other two |
---|
0:12:24 | robots; those two were kind of negligible |
---|
0:12:26 | kind of negligible, a little bit different, but not a |
---|
0:12:29 | whole lot different; so |
---|
0:12:31 | the way the robot looks and the way the robot talks: the prosody kind of tells |
---|
0:12:35 | us that |
---|
0:12:36 | both make a difference here |
---|
0:12:41 | we then |
---|
0:12:43 | for each user interaction, transcribed the speech using a speech API, which of course can make |
---|
0:12:47 | some mistakes, but we just kind of went with it |
---|
0:12:50 | we segmented the transcriptions into sentences by detecting pauses of about one second, which is pretty |
---|
0:12:56 | rough, the way we did this; we didn't tune it too much, we |
---|
0:12:59 | just took these transcriptions and passed them through some tools that gave us |
---|
0:13:05 | lexical complexity and syntactic complexity, so we have |
---|
0:13:10 | a lexical complexity analyzer, which gives us lexical diversity, mean segmental type-token ratio |
---|
0:13:17 | (MSTTR), and lexical sophistication |
---|
0:13:19 | these are nice measures that we can use, and then |
---|
0:13:23 | for syntactic complexity we used the D-Level analyzer, which gives |
---|
0:13:27 | a value between zero and seven |
---|
0:13:29 | zero meaning it's a very short, you know, one- or two-word sentence, very syntactically |
---|
0:13:35 | simplistic, and seven meaning it's a long sentence with a lot of complexity |
---|
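(A minimal sketch of one of these measures, the mean segmental type-token ratio; the segment size and the whitespace tokenization here are assumptions, and the actual analyzer used in the study may differ.)

```python
def msttr(tokens, segment_size=50):
    """Mean segmental type-token ratio (MSTTR).

    Split the token stream into consecutive segments of a fixed size,
    compute the type-token ratio for each full segment, and average.
    """
    ratios = []
    for start in range(0, len(tokens) - segment_size + 1, segment_size):
        segment = tokens[start:start + segment_size]
        ratios.append(len(set(segment)) / segment_size)
    return sum(ratios) / len(ratios) if ratios else 0.0

# Hypothetical usage on a lowercased, whitespace-tokenized transcript.
transcript = "put the red piece next to the blue piece then name the shape".split()
print(msttr(transcript, segment_size=5))
```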
0:13:42 | with the LD, the LS, and the MSTTR the story is |
---|
0:13:48 | very similar to the results we got for prosody |
---|
0:13:50 | in the low setting people used more complex vocabulary; the |
---|
0:13:57 | thing that was surprising, that I want to show you here, is the syntactic |
---|
0:14:00 | complexity: in the low setting we have higher syntactic complexity, more |
---|
0:14:06 | D-Level seven, more longer sentences, versus the high setting |
---|
0:14:10 | I mean, for the most part they're saying very short one- or two-word sentences in |
---|
0:14:15 | all settings with all robots, but in some cases they're speaking in longer |
---|
0:14:20 | sentences; we dug into this a little bit and we found some literature that supports |
---|
0:14:24 | this, and this is kind of what we found in our data |
---|
0:14:27 | in the low setting the robot is giving only phonetic receipt, not semantic understanding, it's not |
---|
0:14:31 | signalling semantic understanding, so they just kind of kept talking |
---|
0:14:36 | the sentences got syntactically more complex even if the vocabulary was simpler |
---|
0:14:40 | by the other measures |
---|
0:14:43 | low lexical sophistication but high syntactic complexity, because they just kept talking |
---|
0:14:53 | looking at the questionnaires: for each interaction we gave a Godspeed questionnaire with contrasting |
---|
0:14:58 | pairs, each with a five-point scale; here are some examples: artificial versus lifelike |
---|
0:15:03 | unfriendly versus friendly, incompetent versus competent, confusing versus clear |
---|
0:15:09 | and then we added the following two questions, which was the information we were interested |
---|
0:15:12 | in |
---|
0:15:13 | if you could give the robot you interacted with a human age, how old would |
---|
0:15:17 | you say it is? |
---|
0:15:19 | we binned the ages into these ranges: under two, two to five, six to |
---|
0:15:22 | twelve, thirteen to seventeen, eighteen to twenty-four, twenty-five to thirty-four, and |
---|
0:15:27 | thirty-five and older |
---|
0:15:32 | what level of education would be appropriate for the robot you interacted with? sort of |
---|
0:15:36 | another proxy for age; and we had preschool, kindergarten, each grade with its |
---|
0:15:42 | own value, and then of course college |
---|
0:15:47 | so just looking at |
---|
0:15:50 | the questionnaires on their own, people assigned |
---|
0:15:54 | to the low setting here |
---|
0:15:56 | on average lower ages, and to the high setting on average higher |
---|
0:16:00 | ages; that's kind of expected, and then looking at the robots |
---|
0:16:05 | you know, the Kobuki and the no robot, the |
---|
0:16:09 | non-embodied robot, get higher ages; I think it's seen as sort of the most |
---|
0:16:14 | intelligent, the smartest, the oldest, and then we have Cozmo here, which at |
---|
0:16:18 | the oldest gets six to twelve |
---|
0:16:21 | not surprising; and education tells a similar story |
---|
0:16:26 | in the low setting, on average, much lower, much younger |
---|
0:16:30 | levels of education than in the high setting; and the difference is not much, right? |
---|
0:16:34 | it's just |
---|
0:16:36 | phonetic receipt versus signalling semantic understanding, just a different feedback strategy, but it makes a |
---|
0:16:41 | huge difference |
---|
0:16:42 | and then of course the robots: people treat them differently |
---|
0:16:46 | the highest Cozmo gets is tenth grade, and then the other |
---|
0:16:50 | ones get undergraduate |
---|
0:16:54 | and then we put what we found from the questionnaires together with some |
---|
0:16:59 | of the other features that we had; I want to point out a |
---|
0:17:02 | few things here |
---|
0:17:04 | in the low setting, if you look at prosody, the average F0, you can |
---|
0:17:07 | look at the questionnaire values, and as both go up they correlate with each |
---|
0:17:11 | other |
---|
0:17:12 | so if a participant used a higher pitch, they tended to rate the robot as friendly, |
---|
0:17:15 | intelligent, kind of conscious, knowledgeable; and higher lexical complexity also went with |
---|
0:17:19 | rating the robot as more friendly |
---|
0:17:21 | in the high setting different things come up here: sensible, enjoyable, natural, humanlike, and |
---|
0:17:27 | then lexical diversity |
---|
0:17:28 | and lexical |
---|
0:17:30 | sophistication; this one I think is interesting: in the high setting |
---|
0:17:33 | if I'm using more complicated |
---|
0:17:37 | words to talk to the robot |
---|
0:17:39 | I am more likely to be frustrated with the robot and to be showing contempt |
---|
0:17:43 | towards the robot |
---|
0:17:45 | and I think that was the interesting result: people had high expectations of |
---|
0:17:49 | the robot in the high setting |
---|
0:17:51 | "well, you understood me, so say more, do more"; they would ask follow-up |
---|
0:17:56 | questions, and the wizard wasn't allowed to say anything other than |
---|
0:18:00 | give this simple feedback |
---|
0:18:03 | there's some other stuff here which kind of |
---|
0:18:04 | tells a similar story if you look at the robots instead of just the low and high |
---|
0:18:07 | setting |
---|
0:18:10 | kind of the same thing: sadness |
---|
0:18:13 | sadness is negatively correlated here, and the other robots have some |
---|
0:18:17 | things as well |
---|
0:18:19 | and this feature is negatively correlated with the lower ages, which were well |
---|
0:18:23 | represented there |
---|
0:18:24 | so you can dig into this a little bit more in the paper |
---|
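(A minimal sketch of that correlation step: pairing each extracted feature with each questionnaire item and keeping the significant pairs. The feature names, the toy values, and the choice of Pearson correlation are assumptions, not the study's exact statistics.)

```python
from scipy.stats import pearsonr

def correlate(features, ratings, threshold=0.05):
    """Correlate each extracted feature with each questionnaire item.

    features: dict of feature name -> list of per-interaction values
              (e.g. mean F0, MSTTR, mean contempt score).
    ratings:  dict of questionnaire item -> list of per-interaction scores.
    Returns the (feature, item, r, p) tuples whose correlation is
    significant at the given p-value threshold.
    """
    significant = []
    for f_name, f_vals in features.items():
        for q_name, q_vals in ratings.items():
            r, p = pearsonr(f_vals, q_vals)
            if p < threshold:
                significant.append((f_name, q_name, round(r, 2), round(p, 3)))
    return significant

# Hypothetical toy example with three interactions.
feats = {"mean_f0": [210.0, 180.0, 230.0]}
quest = {"friendly": [4, 2, 5]}
print(correlate(feats, quest, threshold=0.2))
```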
0:18:30 | so, to predict the perceived age and academic level: now that we have this data |
---|
0:18:33 | we want to use our prosodic and linguistic |
---|
0:18:40 | features |
---|
0:18:41 | our prosodic, emotion, and language features, to predict |
---|
0:18:45 | the age; so we have fifty-eight data points, five-fold cross-validation, and we |
---|
0:18:50 | just used a simple logistic regression classifier |
---|
0:18:54 | nothing terribly complicated here, not very much data; if we use all seven labels we |
---|
0:18:59 | don't do very well; if we find a splitting criterion, say, okay, let's split |
---|
0:19:03 | at eighteen years old and see how well it does |
---|
0:19:06 | we can predict fairly well whether someone thinks the robot is a minor or |
---|
0:19:11 | an adult |
---|
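(A minimal sketch of this prediction step with scikit-learn; the feature matrix here is random placeholder data standing in for the real prosodic, emotion, and language features, and the bin-to-label mapping is an assumption.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Sketch: 58 interactions, each described by the extracted features and
# labeled with the perceived-age bin from the questionnaire.
rng = np.random.default_rng(0)
X = rng.normal(size=(58, 6))           # e.g. mean F0, MSTTR, contempt, ...
age_bin = rng.integers(0, 7, size=58)  # the 7 questionnaire age bins

# Binarize: minor (< 18) vs. adult. Bins 0-3 would cover "under 2"
# through "13 to 17" under the binning described in the talk.
y = (age_bin >= 4).astype(int)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print("mean accuracy:", scores.mean())
```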
0:19:13 | and for academic level we did kind of the same thing, and we found |
---|
0:19:16 | that we can split off preschool with reasonable accuracy |
---|
0:19:21 | so we can tell if someone thinks a robot is at a preschool age; so taken |
---|
0:19:26 | together, we can tell whether someone is |
---|
0:19:29 | assigning adulthood or minorhood to a robot, and whether they furthermore assign |
---|
0:19:34 | a preschool academic level to the robot; and that's actually what we want to do, we |
---|
0:19:39 | want to be able to determine: do they think my robot is at preschool age, at |
---|
0:19:43 | the language-learning stage? |
---|
0:19:46 | so that's it; we did some other stuff along with the things that I |
---|
0:19:51 | showed you, which confirms the findings of the Novikova et al. paper |
---|
0:19:57 | the way the robot verbally interacts, and this just confirms again that the way it looks |
---|
0:20:00 | also matters, changes the way human participants perceive the robot's age and academic level |
---|
0:20:05 | perceived age and academic level can be predicted using multiple features; for future work, now that we've |
---|
0:20:10 | kind of verified this, we want to choose the right robot for the job for a first language |
---|
0:20:15 | acquisition task, and it doesn't look like it should be a humanlike robot, which |
---|
0:20:19 | we |
---|
0:20:20 | people don't really want to look at anyway; and with that, thank you for your attention |
---|
0:20:48 | I was curious why you used eighteen; I would have expected |
---|
0:20:53 | a younger split, because preschool is really small children, so |
---|
0:21:01 | you could have split the |
---|
0:21:03 | education level in many different ways, right? |
---|
0:21:08 | we did try |
---|
0:21:11 | a couple of other things, and not only did that one work, it |
---|
0:21:14 | also makes kind of sense |
---|
0:21:16 | minors versus adults seems like a reasonable splitting criterion, so let's use that; of |
---|
0:21:21 | course it's not the one we're looking for, which is |
---|
0:21:23 | do they think it's essentially a child, and that's what the preschool one does pretty |
---|
0:21:27 | well |
---|
0:21:28 | it just worked out that way, sorry |
---|
0:21:44 | when you have this |
---|
0:21:46 | chart of the perceived age for the low and high setting, it kind of |
---|
0:21:51 | looked |
---|
0:21:54 | like |
---|
0:21:57 | the low setting, if I can read it, was |
---|
0:22:06 | more likely to be perceived as being a child, but also more likely to be |
---|
0:22:09 | perceived as being an adult |
---|
0:22:11 | it's really unlikely to be a teenager |
---|
0:22:15 | there are these pesky |
---|
0:22:17 | undergraduate assignments |
---|
0:22:19 | so that too, for example; well, I mean, in general, if you look |
---|
0:22:24 | at the academic level here, okay, this one gets some, and then this one gets an |
---|
0:22:29 | additional undergraduate one, but I mean there's a lot more preschool here |
---|
0:22:32 | and more, you know, kindergarten and first grade stuff here; I mean |
---|
0:22:36 | on average it is quite a bit more, but there are some people who went |
---|
0:22:40 | high, and that's quite interesting |
---|
0:22:57 | I may have missed something, but also, in your |
---|
0:23:01 | questionnaire |
---|
0:23:04 | you were saying |
---|
0:23:06 | people had |
---|
0:23:09 | expectations of |
---|
0:23:11 | the robot |
---|
0:23:13 | I was wondering if |
---|
0:23:15 | this was what people told you, or |
---|
0:23:19 | your explanation of the data, based on other kinds of things that you found, that they |
---|
0:23:27 | assessed the robots as knowledgeable and so on |
---|
0:23:30 | it's both put together; so the Q stuff is what they said |
---|
0:23:37 | on the questionnaires; the other stuff, like the P, the L, and the E |
---|
0:23:43 | right, so Q means it came from the questionnaire, E means it came from the |
---|
0:23:47 | emotion stuff that we got from the Microsoft Emotion API, that we just read off |
---|
0:23:50 | of it |
---|
0:23:51 | so we have what they're telling us, and we have what we're getting from the data we |
---|
0:23:55 | collected, and the correlations we computed from that; so yes, it is our |
---|
0:23:59 | interpretation: like with this contempt, we're saying, okay, in the high setting |
---|
0:24:05 | we detected from them that they used high lexical diversity, using our tools |
---|
0:24:10 | and we detected from them that they had high contempt, from the |
---|
0:24:14 | emotion API or something like that; so in this case those things were correlated; but this other stuff |
---|
0:24:18 | this is what they reported, like they thought it was enjoyable or sensible |
---|
0:24:22 | or whatever; and in this case, like in the low setting, when there was high |
---|
0:24:27 | lexical sophistication, they would have given a high score on the questionnaire |
---|
0:24:34 | it's a testable |
---|
0:24:36 | so i understand that vectors |
---|
0:24:40 | really |
---|
0:24:42 | you know |
---|
0:24:46 | yes okay |
---|
0:24:52 | a common |
---|
0:24:53 | thank you |
---|