0:00:15 | Hello everyone. I'm a student at Stanford University, and I'll be |
0:00:20 | discussing some joint work between my collaborators in the Stanford NLP group and the |
0:00:25 | Ford Research and Innovation Center. |
0:00:27 | And I guess before I actually get started: I think this conference can be pretty |
0:00:31 | deep-learning happy, so |
0:00:33 | before you kind of start treating my research as, you know, the harbinger of |
0:00:36 | everything that is bad in dialogue today, namely end-to-end learning as well, |
0:00:40 | I want to ask you to please keep an open mind about this. |
0:00:45 | So, with that said, |
0:00:47 | before I get into the work I'd like to take a step back |
0:00:51 | and discuss some of what I think are the larger motivations of dialogue research, |
0:00:55 | and to do that I'd like to talk about the film Her, which some of you may |
0:00:59 | have seen. |
0:01:00 | In it, the protagonist, played by Joaquin Phoenix, essentially develops an intimate relationship, in the |
0:01:06 | space of the film's world, with his super-intelligent assistant Samantha, |
0:01:10 | and part of what makes her so appealing is her charisma, her ability to |
0:01:15 | conduct very intelligible conversations. |
0:01:17 | And while I won't necessarily spoil the details of the movie, I would like to |
0:01:22 | say that I think it does a fantastic job of illustrating what is really at |
0:01:25 | the core of a lot of dialogue research. I think on the one hand we |
0:01:28 | are trying to build very practically useful agents; we're trying to build |
0:01:32 | things that people can use on a daily basis. |
0:01:34 | But I think more broadly |
0:01:35 | we also should be trying to build agents that are sociable, compassionate, empathetic, |
0:01:39 | relatable, and collaborative, and I think in doing so we'll learn a lot about ourselves: what we |
0:01:45 | as humans are, what makes us human, what's at the core of our humanity. |
0:01:49 | And so I think this dual motive is something that |
0:01:53 | drives a lot of dialogue research, and certainly guides a lot of the |
0:01:56 | work that I |
0:01:57 | would like to do. |
0:01:59 | Moving now into the actual talk itself, |
0:02:03 | a quick roadmap: I'm going to be discussing some background to this work, then I'll be |
0:02:07 | discussing the model that we developed and a dataset we also developed, |
0:02:12 | then the experiments that validated the approach, and some concluding remarks. |
0:02:19 | So, background. |
0:02:22 | If we take this snippet of dialogue between a human asking a fairly |
0:02:26 | simple query, you know, "what time is my doctor's appointment", |
0:02:30 | we would like an agent to be able to answer the query with |
0:02:33 | reasonable effectiveness and say something to the effect of "your appointment is at 3 pm on |
0:02:37 | Thursday". |
0:02:38 | Traditional dialogue systems tend to have a lot going on in the back end. |
0:02:42 | We have a number of modules that do various things, including natural language understanding, interfacing |
0:02:47 | with some sort of a knowledge base, and then obviously natural language generation. |
0:02:52 | Traditionally we have separate modules doing all these things together, and |
0:02:56 | oftentimes it can be very difficult to achieve a smooth interaction between all these |
0:03:00 | different modules. |
0:03:01 | And so I think the question posed by a lot of present-day neural dialogue |
0:03:04 | researchers is whether we will be able to kind of automate |
0:03:06 | some, or |
0:03:07 | really all, of these separate modules in a way that is effective and doesn't really |
0:03:12 | limit performance. |
0:03:15 | More specifically, I think one of the big challenges that a lot of |
0:03:18 | present-day neural dialogue systems suffer from is interfacing with the knowledge base itself. |
0:03:23 | And so |
0:03:24 | really the kind of thing we would like to see is a smooth |
0:03:28 | interaction among these heterogeneous components, and if we could replace all these separate, hard-working |
0:03:33 | little robots with one mega-robot, i.e. an end-to-end dialogue system, then |
0:03:37 | maybe we're getting some sort of progress. |
0:03:39 | This is of course |
0:03:40 | a lofty goal, but it is what we would like to work towards. |
0:03:45 | For the purposes of this work, I'll first discuss |
0:03:49 | some previous work that has been done in this general line of |
0:03:52 | inquiry. |
0:03:53 | So, some work from Wen et al. has sought to essentially take the traditional modular, |
0:03:58 | connected paradigm and replace some or all of the components with neural equivalents, and |
0:04:04 | other work has tried to |
0:04:06 | kind of enhance the KB lookups, handling interaction with the KB through some |
0:04:10 | sort of soft operation that still maintains some sort of belief state tracking. |
0:04:16 | There's another line of work that kind of tries to find a middle ground, |
0:04:19 | that tries to take the best of the rule-based, heuristic systems |
0:04:22 | and the more neural systems that are amenable to neural training. |
0:04:27 | And then there's some work that we have been pursuing in the past |
0:04:29 | that seeks to |
0:04:31 | build some sort of end-to-end system that builds on the traditional seq2seq paradigm |
0:04:35 | and is able to enhance that paradigm with mechanisms that allow for more effective |
0:04:40 | dialogue exchanges. |
0:04:43 | The motivation of our work is then twofold. |
0:04:45 | One, we would like to develop some sort of a system that can interface with |
0:04:48 | the knowledge base in a more or less end-to-end fashion, without the need for explicit |
0:04:53 | training of belief state trackers. |
0:04:55 | And I think a subquestion of that is: how do we get a sequence-to- |
0:04:59 | sequence architecture, this popular architecture, to interact nicely with intrinsically structured information? You know, |
0:05:05 | we're talking about a |
0:05:06 | sequential model |
0:05:07 | combining with this more structured representation, |
0:05:10 | and |
0:05:11 | getting these to work together is something that I think is going to be a challenge |
0:05:14 | going forward. |
0:05:17 | Some details on the model. |
0:05:20 | First off, I don't know what people's general familiarity with neural models is, but |
0:05:25 | the encoder-decoder with attention framework is one that has been investigated in a number of different works, |
0:05:30 | and for the purposes of dialogue it involves more or less the exact same starting paradigm, the |
0:05:34 | same general backbone. On the encoder side we're basically feeding in a single token of |
0:05:39 | dialogue context at a time through a recurrent unit, highlighted in blue, |
0:05:44 | and we're unrolling the recurrence for some number of timesteps. |
0:05:48 | And after some number of computations we get the hidden state that |
0:05:51 | is used to initialize the decoder, which is also a recurrent unit and is also unrolled |
0:05:55 | for some number of timesteps. |
0:05:57 | At each step of the decoding we're going to be referring back to the encoder and |
0:06:01 | essentially computing some sort of a distribution |
0:06:04 | over the various tokens of the encoder, |
0:06:06 | and this will be used to generate a context vector that is then combined with |
0:06:10 | the decoder hidden state to form a distribution over possible output tokens that we can |
0:06:16 | argmax over and essentially emit our system response from. |
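As a minimal sketch of the attention step just described, here is one decoding step in plain Python. The function names are mine, and the raw dot-product scoring stands in for whatever learned scoring function the actual model uses; trained RNN states would replace these toy vectors.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of raw scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attention_step(decoder_state, encoder_states):
    # Score each encoder hidden state against the current decoder state,
    # normalize the scores into a distribution, and form the context vector
    # as the weighted sum of the encoder hidden states.
    weights = softmax([dot(decoder_state, h) for h in encoder_states])
    dim = len(decoder_state)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context
```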
0:06:22 | So with this general background, I hypothesized that in principle we should be able |
0:06:26 | to just take this decoder hidden state that we're already computing at a given |
0:06:29 | timestep, move that one step further, and say: hey, use this exact same decoder hidden |
0:06:34 | state to compute some sort of an attention over the rows of a knowledge base. |
0:06:38 | So then the question is, how do we actually represent the knowledge base in such |
0:06:41 | a way that this is actually feasible? I mean, we're again talking about structured |
0:06:44 | information, and we're trying to deal with it in more of a sequential fashion, too; |
0:06:48 | we are interested in sequences. |
0:06:52 | So again, the question really guiding |
0:06:56 | the work is: how can we represent a KB effectively? |
0:06:59 | To do so, we draw inspiration from |
0:07:04 | the key-value memory networks of Miller et al., which essentially showed that a key-value representation |
0:07:08 | is |
0:07:09 | not only kind of a nice, |
0:07:11 | elegant design paradigm, but also |
0:07:13 | can directly be shown to be quite effective on a number of different tasks, |
0:07:18 | so maybe it's something helpful for us. |
0:07:20 | To show how this would actually play out for our purposes, I'm going to be |
0:07:25 | taking one row of a KB and showing how we're trying to transform it into something |
0:07:28 | that is amenable to a key-value representation. |
0:07:33 | So consider this single row of a KB. Here we're talking about a |
0:07:37 | calendar scheduling task, |
0:07:39 | and we have some of |
0:07:40 | this structured information, |
0:07:42 | and we want to convert that into essentially what is a subject-relation-object |
0:07:46 | triple format. |
0:07:47 | And so here what we're doing is: we have some event, the dinner, |
0:07:51 | which is connected to a number of different items, the facts about the dinner, |
0:07:56 | through some relation. So you have some time, which is one relation, and a date, which |
0:08:00 | is one relation, et cetera, et cetera, |
0:08:02 | and all the information that is originally represented in the row of the |
0:08:06 | knowledge base is now collapsed into triple format. |
0:08:11 | And so this is the first sort of operation that we're going to work with. |
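That flattening step can be sketched in a few lines. The column names and the helper below are illustrative assumptions on my part, not the talk's actual schema:

```python
def row_to_triples(row, subject_column):
    # Flatten one KB row (a dict of column -> value) into
    # (subject, relation, object) triples, using one designated
    # column as the subject of every triple.
    subject = row[subject_column]
    return [(subject, relation, value)
            for relation, value in row.items()
            if relation != subject_column]

# hypothetical calendar row, in the spirit of the talk's dinner example
row = {"event": "dinner", "time": "8pm", "date": "the 13th", "party": "Ana"}
triples = row_to_triples(row, "event")
# yields triples like ("dinner", "time", "8pm")
```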
0:08:17 | Going from the subject-relation-object triple format, |
0:08:20 | we then |
0:08:21 | make just one small change, which converts it into a key-value store: |
0:08:26 | taking the subject and relation and essentially concatenating them to form a sort of canonicalized |
0:08:30 | representation that is our key. |
0:08:33 | That is sort of exactly what we're trying to do. |
0:08:35 | So, |
0:08:35 | if you look at the first row, we had the subject-relation-object triple for the dinner time, |
0:08:40 | an 8 pm, |
0:08:41 | and |
0:08:42 | this subject and relation essentially become this new, normalized mega-key, a |
0:08:47 | key called "dinner_time" for lack of a better word, and the object is just mapped one- |
0:08:52 | to-one to the value. |
0:08:54 | And we do the same for every single other row in the original, |
0:08:57 | that is, every row in triple format. |
0:09:00 | And so, because we're dealing with embeddings, |
0:09:02 | the keys in this case end up being just the sum of the subject and relation |
0:09:06 | embeddings. |
0:09:06 | So "dinner_time" in this case is just literally the sum of the dinner embedding and the |
0:09:09 | time embedding. |
0:09:10 | And |
0:09:11 | an important detail is that now, when we're doing some sort of decoding, |
0:09:14 | we're argmaxing over an augmented vocabulary, |
0:09:17 | which includes not only the original vocabulary that we started off with, but now also |
0:09:21 | these |
0:09:22 | additional canonicalized key representations. |
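A toy sketch of that canonicalization, assuming tiny two-dimensional embeddings; the key-naming scheme and all names here are my guesses, not the model's exact implementation:

```python
def canonical_key(subject, relation):
    # e.g. ("dinner", "time") -> "dinner_time"; the naming scheme is assumed
    return f"{subject}_{relation}"

def key_embedding(subject, relation, emb):
    # The key's embedding is just the sum of the subject
    # and relation word embeddings, as described in the talk.
    return [a + b for a, b in zip(emb[subject], emb[relation])]

# purely illustrative 2-d embeddings
emb = {"dinner": [0.1, 0.4], "time": [0.2, -0.1]}
kv_store = {canonical_key("dinner", "time"): "8pm"}
vocab = ["your", "is", "at"]
# the decoder argmaxes over the original vocabulary plus the canonicalized keys
augmented_vocab = vocab + list(kv_store.keys())
```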
0:09:28 | When we put it all together, we |
0:09:29 | have essentially what we started out with, which was the seq2seq encoder-decoder with |
0:09:33 | attention framework, |
0:09:34 | but now we've folded in this attention over the knowledge base. |
0:09:39 | We compute some weight over every single row of the knowledge base, |
0:09:43 | and so, for example, in the case of something like, you know, the football time |
0:09:47 | at 2 pm |
0:09:47 | that's visible here, |
0:09:50 | there's a weight that is used to upweight the appropriate entry, in this |
0:09:54 | case the "football_time" canonical representation, in the distribution we're argmaxing over. |
0:10:00 | We do this essentially for every single row |
0:10:04 | of the new canonicalized KB, |
0:10:09 | and this, essentially, is the adjusted model. |
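Putting those pieces together, one decoding step over the augmented vocabulary might look like the following sketch. Again this is a simplification under assumed names: real models would use learned output projections rather than raw dot products with embeddings.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decode_step(decoder_state, vocab, vocab_emb, kb_keys, key_emb):
    # Score ordinary vocabulary tokens and canonicalized KB keys with the
    # same decoder hidden state, take one softmax over the augmented
    # vocabulary, and argmax to pick the next output token.
    scores = [dot(decoder_state, vocab_emb[w]) for w in vocab]
    scores += [dot(decoder_state, key_emb[k]) for k in kb_keys]
    dist = softmax(scores)
    augmented = vocab + kb_keys
    return augmented[max(range(len(dist)), key=dist.__getitem__)]
```

With a decoder state well aligned to a key like "football_time", that key wins the argmax and can later be replaced by its KB value (2 pm) in the surface response.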
0:10:14 | Moving on to |
0:10:16 | the dataset that we used. |
0:10:19 | First off, I guess, a quick note: data scarcity is an obvious issue in a |
0:10:22 | lot of dialogue research, especially when we're talking about the neural dialogue models that |
0:10:26 | a lot of people are dealing with. You know, it seems that more data often |
0:10:30 | helps, but |
0:10:31 | given that our collaboration was with Ford, which obviously is a |
0:10:35 | car company, and hence |
0:10:37 | they're really only interested in things relevant to cars, |
0:10:40 | we had to go about building essentially a new dataset |
0:10:43 | that would |
0:10:44 | still let us ask the same questions that we want to |
0:10:46 | ask about knowledge bases, but is more relevant to their use case, |
0:10:50 | that being the in-car virtual assistant domain. |
0:10:53 | So here the three sub-domains we were interested in are calendar scheduling, weather, |
0:10:59 | and point-of-interest navigation. |
0:11:04 | The way we went about collecting the dataset |
0:11:06 | essentially uses a Wizard-of-Oz scheme, which is adapted from the work of Wen et al., |
0:11:10 | and essentially what we're doing is we have |
0:11:14 | crowdsourced workers |
0:11:14 | that are playing one of two roles: essentially they can either be the driver or the |
0:11:19 | car assistant, |
0:11:20 | and we progress dialogue collection one exchange at a time. |
0:11:24 | So the driver-facing interface looks like this: |
0:11:27 | you have essentially a task that's generated automatically for the worker, |
0:11:31 | and they're usually provided with the dialogue history, but because this is the first |
0:11:35 | exchange of the dialogue, there's no history to begin with, |
0:11:38 | and then the worker is tasked with essentially progressing the dialogue |
0:11:44 | a single turn. |
0:11:47 | On the car assistant's side, |
0:11:49 | we also provide the dialogue history so far, |
0:11:53 | but |
0:11:54 | the car assistant is actually being asked to use some private collection of information that they |
0:11:58 | have access to, which the user does not have access to, and they are then supposed |
0:12:02 | to use that information to also progress |
0:12:04 | the dialogue, ideally toward exactly what the user wants. |
0:12:10 | The dataset ontology |
0:12:12 | has a number of different |
0:12:14 | entity types and associated values across the different domains, |
0:12:18 | and I guess that lends itself to a fairly large amount of |
0:12:22 | diversity in the types of things that people can talk about. |
0:12:26 | Once data collection was done, we had a little over three thousand dialogues, and it |
0:12:30 | was more or less split evenly across the three different domains, |
0:12:33 | with an average number of about five utterances per dialogue, as well as |
0:12:38 | about nine tokens per utterance. |
0:12:42 | Now for some experiments |
0:12:44 | using this dataset and the model we propose. |
0:12:48 | The baselines that we used for benchmarking our model were two. |
0:12:51 | First, we built a sort of traditional rule-based system that uses |
0:12:56 | manual rules to do the natural language understanding as well as the natural language generation, |
0:13:02 | and to do all the interfacing with the KB. |
0:13:05 | And then the kind of neural competitor that we put up against our |
0:13:10 | new model was the copy-augmented seq2seq model that we had built |
0:13:14 | previously in prior work, which at its core is essentially also an encoder-decoder framework |
0:13:20 | with attention, |
0:13:21 | the same kind of backbone, but it also augments that |
0:13:24 | with an additional copy mechanism over the entities that are mentioned in the dialogue |
0:13:29 | context. |
0:13:29 | We chose this |
0:13:30 | because, one, it is the exact same class of models as the new one |
0:13:35 | we're proposing, i.e. seq2seq with attention, |
0:13:39 | and I guess previous work has also shown that this is actually pretty competitive with other |
0:13:42 | model classes, including the End-to-End Memory Network from Facebook, |
0:13:46 | and also because the code was already there, so, you know. |
0:13:51 | So I guess for automatic evaluation we had a number of different metrics, and I'm going to |
0:13:56 | say this up front and bite the bullet: we did provide some sort of |
0:13:59 | automatic evaluation, |
0:14:00 | but I know that, in dialogue especially, automatic evaluation is something that |
0:14:04 | is a little tricky to do, in that it really is a bit |
0:14:07 | of a divisive topic. |
0:14:10 | But there were some metrics that I guess people have reported previously, so we |
0:14:14 | kind of just followed the line of previous work. |
0:14:16 | We used BLEU, which is of course adapted from machine translation. There's some |
0:14:21 | work that says it's actually an awful metric with |
0:14:22 | no correlation to human judgement, and then there's some more recent work that says, you know, |
0:14:26 | it's pretty decent, the n-gram-based metric is not really all that bad. |
0:14:32 | And then we provided an entity F1, which basically is a |
0:14:35 | micro-averaged F1 over the set of entities that are mentioned in the response, as compared |
0:14:42 | to those in the target response that we're going for. |
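The micro-averaged entity F1 just described can be sketched as follows, treating each response's mentioned entities as a set; the exact matching and normalization details of the actual evaluation may differ:

```python
def micro_entity_f1(predicted_sets, target_sets):
    # Pool true positives, false positives, and false negatives over all
    # (predicted, target) response pairs, then compute precision, recall,
    # and F1 once on the pooled counts (micro-averaging).
    tp = fp = fn = 0
    for pred, gold in zip(predicted_sets, target_sets):
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```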
0:14:46 | So when we pitted all the models against each other, we saw that, |
0:14:51 | first off, the rule-based model doesn't have a particularly high BLEU, which |
0:14:54 | again I wouldn't read into too much, but |
0:14:56 | that can simply be explained by the fact that maybe we didn't write as many |
0:15:00 | diverse templates for natural language generation. |
0:15:04 | But its entity F1 is decent, in the sense that, |
0:15:07 | you know, we did target the rules in a way that would be |
0:15:09 | pretty accurate at picking out |
0:15:11 | and accommodating search queries. |
0:15:14 | The copy network is what had a pretty decent BLEU score, which |
0:15:18 | can of course be attributed to the fact that, I mean, these seq2seq models are |
0:15:20 | known to be good at language modeling, but its entity F1 is pretty bad comparatively, and |
0:15:25 | this is, I guess, a function of the fact that |
0:15:28 | essentially the copy network isn't really making use of the KB directly, instead relying totally |
0:15:33 | on dialogue context to generate entities. |
0:15:36 | And then the key-value retrieval network outperforms these on the various metrics: |
0:15:41 | it performed pretty well on BLEU as well as entity F1. But we also show human performance |
0:15:46 | on this, and show that |
0:15:48 | there's still naturally a gap to be filled. So while this is encouraging, it's not, |
0:15:53 | I'm not receiving it as definitive, and it's by no means suggestive of the fact that |
0:15:56 | one model is superior to the other, |
0:15:58 | but it is there as a coarse-grained evaluation. |
0:16:03 | We also provide a human evaluation, |
0:16:05 | where we essentially generated about a hundred and twenty distinct scenarios across the three different domains |
0:16:09 | that we had, |
0:16:10 | ones that had never before been seen in training or test, and then we paired |
0:16:14 | the different model classes with AMT workers in real time, and had them conduct |
0:16:19 | the dialogue and then assess the quality of the dialogue based on fluency, cooperativeness, |
0:16:23 | and human-likeness on a one-to-five scale. |
0:16:28 | Here, I mean, this kind of human evaluation tends to be a little |
0:16:32 | more consistent, a little more seriously regarded, and again the key-value |
0:16:36 | retrieval network actually outperforms the |
0:16:38 | various competitors, |
0:16:39 | especially getting good gains over the copy network, which is of course encouraging. |
0:16:44 | Here again we also have human performance, which, |
0:16:46 | I mean, as a sort of sanity check, does provide an upper bound, and |
0:16:49 | there is still a really large margin between even our best-performing system and human performance, |
0:16:53 | so |
0:16:54 | there's still a gap to be filled there. |
0:16:59 | Just as an example of a dialogue from one of these scenarios, |
0:17:03 | we have here a sort of truncated knowledge base, |
0:17:08 | and |
0:17:09 | we're in the point-of-interest navigation |
0:17:11 | setting, and |
0:17:12 | we have the driver asking for |
0:17:14 | a gas station with the shortest route from where they are. |
0:17:18 | The car answers appropriately; |
0:17:20 | you know, the driver kind of follows up asking about the nearest gas station, the car |
0:17:24 | answers again, |
0:17:25 | and it's answering appropriately with respect to the knowledge base it's given. So it's nice |
0:17:29 | to see that there is a reference to the knowledge base and it's handling things |
0:17:32 | appropriately. |
0:17:35 | Some conclusions and kind of final thoughts. |
0:17:38 | So the main contributions of the work were, namely, that we had this new class |
0:17:42 | of seq2seq-style models that is able to perform a lookup over |
0:17:47 | the knowledge base in a way that is fairly effective, |
0:17:52 | and it does this without any slot or belief state tracking, which is kind of |
0:17:55 | a nice benefit, |
0:17:57 | and it does outperform several of the baselines |
0:17:59 | on a number of different metrics. |
0:18:01 | And in the process we also created a new dataset of roughly three thousand dialogues |
0:18:05 | in what is arguably |
0:18:08 | a new domain. |
0:18:11 | For future directions, I think one of the main ones is scaling up the knowledge bases. |
0:18:14 | So |
0:18:15 | right now we're not exactly on the scale of knowledge base that people would |
0:18:18 | see in real-world applications. If you think of somebody's |
0:18:22 | typical Google Calendar, or |
0:18:25 | anything of that nature, there is always a disparity in the size of |
0:18:28 | these knowledge bases, |
0:18:29 | and so we'd like to move toward the actually realistic realm of the types of things that |
0:18:33 | people talk about, and the magnitude of the types of things people talk about. |
0:18:37 | We'd also like to kind of move away from |
0:18:39 | operating in the static training regime and instead do |
0:18:43 | more RL-based things, with which we could accommodate any deviations from typical dialogue templates that |
0:18:48 | we may see. |
0:18:49 | And I guess even further down the line, it would be nice to see models |
0:18:52 | that |
0:18:53 | can actually incorporate more kind of pragmatic reasoning |
0:18:55 | into the kinds of inferences that they're able to make, so that a simple query |
0:18:59 | like "will I need to wear a jacket today?" invokes the pragmatic reasoning that allows you to say |
0:19:03 | that, hey, wearing a jacket is indicative of some sort of temperature-related reasoning |
0:19:06 | we're going to have to do, embedded also in the model. |
0:19:11 | So that's my presentation. Thank you. I'd be happy to take questions. |
0:19:24 | (Inaudible audience question.) |
0:19:45 | I think that's a great question, and I think right now, for this |
0:19:49 | particular iteration of the model, |
0:19:52 | I think it is |
0:19:53 | relatively dependent on the types of things that are talked about, because, again, |
0:19:58 | the entire lookup operation is dependent on embeddings, and those embeddings have to have been |
0:20:02 | trained on the appropriate types of data, you see. And so if you're talking |
0:20:06 | about calendar scheduling for, you know, five hundred dialogues, and all of a sudden you're talking about, you |
0:20:11 | know, ponies or something, it's going to be hard to have well-trained embeddings that are |
0:20:15 | going to allow you to do that. And so I think that certainly |
0:20:18 | this is something that is a subject of future work, and I can think of |
0:20:22 | some ways, you know, using pre-trained embeddings, that may let you kind of circumvent the need |
0:20:25 | to literally train from scratch again, and kind of bootstrap a little bit more off the kinds |
0:20:30 | of things you expect to see. I think it's definitely something to |
0:20:33 | explore further. |
0:20:44 | (Audience) Thank you for your presentation. I just wonder: during the experiments, the training process, |
0:20:49 | and |
0:20:51 | the testing as well, how do you deal with unseen situations? You know, |
0:20:56 | if you show the system knowledge it hasn't seen, |
0:21:01 | unseen situations, how is the model going to deal with that? Sorry, |
0:21:05 | how can it do anything to deal with situations unseen in the task? |
0:21:10 | (Speaker) So, |
0:21:11 | in what particular sense, like, are you talking about? |
0:21:15 | (Audience) So it's like, if something is entirely different from what you've seen before, |
0:21:19 | or maybe just, |
0:21:21 | you know, just |
0:21:24 | new KB values, so the task form does not change. |
0:21:29 | (Speaker) I mean, I think in this case |
0:21:32 | it would have to be augmented a little bit more with some sort of a |
0:21:34 | copy mechanism over the KB. |
0:21:38 | I mean, I guess in this case |
0:21:40 | it is a little bit dependent on the kinds of things that it's seen, |
0:21:44 | and I think that |
0:21:47 | in general there's work to be done |
0:21:50 | there. |
0:21:52 | I mean, right now it's able to pattern-match only having seen these entities in training |
0:21:55 | as well. |
0:21:57 | I think in general it's something where we'd want to look at how this can |
0:22:00 | be done in a way that is |
0:22:02 | less dependent on the keys as they appear in training, and I think right |
0:22:06 | now it would probably, as you suggest, be a little difficult to handle, |
0:22:09 | but |
0:22:10 | it's something to work toward a solution to. |
0:22:23 | (Audience) On the last point that you had on your slide, citing some future directions with structured knowledge: the |
0:22:30 | nice thing about a structured information system is that you can perform reasoning over it, you |
0:22:36 | can see the connections. Would you like to incorporate that with the |
0:22:40 | end-to-end model? |
0:22:41 | (Speaker) Right, you mean allowing for these kinds of more complex styles of reasoning? |
0:22:47 | I mean, that's a really good point, and |
0:22:50 | I think the last one especially is right now a little bit of a long |
0:22:53 | shot, I mean in the sense that, |
0:22:56 | even though it covers the kinds of things that are common, it still is something |
0:23:01 | that more or less falls into |
0:23:03 | one particular type of pattern, where there's a slot you can fill as well as an |
0:23:06 | action to take on that. |
0:23:07 | I think that |
0:23:09 | right now, where the model would face trouble would be with these kinds of |
0:23:13 | things that have to deal with, you know, synonyms, various kinds of figures |
0:23:18 | of speech, et cetera, |
0:23:19 | and I don't have, like, a super-concrete answer for what that would look like, |
0:23:22 | because the model is very much of this, like, |
0:23:26 | slot-filling flavor. But I think that |
0:23:28 | the interplay of chitchat systems and the kind of more structured information |
0:23:33 | is one that should definitely be explored more, and I think that, you know, |
0:23:36 | what you said really touched on that a lot as well. |
0:23:41 | (Session chair) Thank you to the speaker. |