0:00:15 | thank you very much for the introduction |
---|
0:00:18 | i'm going to talk about our recent work |
---|
0:00:20 | at cambridge university |
---|
0:00:24 | current dialogue models |
---|
0:00:26 | are limited in the complexity of dialogue structure as they don't allow for example relations between |
---|
0:00:32 | objects |
---|
0:00:34 | in this work we present a model that deliberately breaks those limitations |
---|
0:00:47 | and |
---|
0:00:48 | we allow our model to have complex dialogue structures with relations between objects |
---|
0:00:56 | and even more so we enable the system to address |
---|
0:01:01 | relations between objects and we allow |
---|
0:01:03 | users to address relations between objects |
---|
0:01:08 | and even though this model has been |
---|
0:01:10 | developed for pomdp-based task oriented spoken dialogue systems |
---|
0:01:15 | it should be applicable to all kinds of dialogue systems where you want to |
---|
0:01:19 | have a symbolic representation of the dialogue state |
---|
0:01:22 | but first a quick recap on pomdp-based |
---|
0:01:26 | spoken dialogue systems |
---|
0:01:27 | so what you see here is the architecture of a spoken dialogue system |
---|
0:01:32 | we have the input processing |
---|
0:01:33 | modules which yield a semantic representation |
---|
0:01:38 | of the input of the user |
---|
0:01:40 | this is then used in the belief tracking |
---|
0:01:42 | module to update the |
---|
0:01:44 | belief state which is then used by the dialogue policy to choose the next |
---|
0:01:48 | system action |
---|
0:01:50 | which is then |
---|
0:01:51 | transferred back |
---|
0:01:52 | by the language generation and synthesis module |
---|
0:01:55 | into a response to the user |
---|
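the pipeline just described can be sketched in a few lines; this is a toy illustration only, and every module body, slot name and dialogue act below is invented for the sketch, not taken from the actual system:

```python
# toy sketch of the pipeline described above (input processing -> belief
# tracking -> policy -> generation). all names here are illustrative.

def understand(user_utterance: str) -> dict:
    """toy semantic decoder: map the utterance to slot-value pairs."""
    acts = {}
    if "hotel" in user_utterance:
        acts["inform_type"] = "hotel"
    if "expensive" in user_utterance:
        acts["inform_pricerange"] = "expensive"
    return acts

def track_belief(belief: dict, sem_acts: dict, last_sys_act: str) -> dict:
    """update the belief state from the decoded input and previous system act."""
    new_belief = dict(belief)
    new_belief.update(sem_acts)          # toy update: overwrite slots directly
    new_belief["last_sys_act"] = last_sys_act
    return new_belief

def policy(belief: dict) -> str:
    """choose the next system action from the belief state."""
    if "inform_pricerange" not in belief:
        return "request(pricerange)"
    return "inform(name)"

def generate(sys_act: str) -> str:
    """toy language generation."""
    templates = {"request(pricerange)": "what price range are you looking for?"}
    return templates.get(sys_act, "here is a matching venue.")

# one turn through the pipeline
belief = track_belief({}, understand("i need a hotel"), "hello()")
print(generate(policy(belief)))  # -> what price range are you looking for?
```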
0:01:58 | and the task of the belief tracker is then to update the |
---|
0:02:01 | distribution over all possible dialogue states based on the current user input the previous |
---|
0:02:06 | system action and the previous belief state |
---|
0:02:09 | and |
---|
0:02:11 | reinforcement learning is then used to find the optimal policy which maps the current belief state to the next |
---|
0:02:15 | system action using a reward signal from the user |
---|
0:02:20 | and the optimal policy is then defined as |
---|
0:02:22 | the policy which maximizes the expected future cumulated reward |
---|
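the objective just described, the expected future cumulated reward, can be illustrated with a toy computation; the reward shape used here (-1 per turn, +20 on success) is a common convention in this literature that we assume for the sketch, not a figure from the talk:

```python
# toy illustration of the policy's objective: the expected future
# cumulated (discounted) reward over one dialogue.

def discounted_return(rewards, gamma=0.99):
    """cumulated reward: sum over t of gamma^t * r_t for one dialogue."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# a successful 10-turn dialogue: nine turn penalties, then the success reward
rewards = [-1] * 9 + [20]
print(discounted_return(rewards, gamma=1.0))  # -> 11
```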
0:02:28 | this architecture has been used for |
---|
0:02:30 | single domain dialogues a lot |
---|
0:02:33 | and also for multi domain dialogues |
---|
0:02:36 | and |
---|
0:02:38 | using the multi-domain dialogue model in pydial |
---|
0:02:40 | here's an example dialogue |
---|
0:02:43 | about finding a hotel and a restaurant in cambridge |
---|
0:02:48 | so |
---|
0:02:49 | the user first asks for a hotel in cambridge the system asks a few questions about the |
---|
0:02:54 | price range and so on and the system is able to |
---|
0:02:57 | give a response |
---|
0:02:59 | the hotel cambridge is an expensive hotel in the centre |
---|
0:03:03 | the user then says i'd also want a restaurant in cambridge |
---|
0:03:06 | in the centre |
---|
0:03:08 | the system continues asking questions to figure out what the actual |
---|
0:03:13 | restaurant is that the user wants and then it |
---|
0:03:17 | proposes a matching restaurant |
---|
0:03:19 | in the centre |
---|
0:03:21 | and |
---|
0:03:22 | in the multi-domain dialogue model |
---|
0:03:24 | the user input first enters the domain tracker which decides okay what's |
---|
0:03:28 | the current domain of the user input |
---|
0:03:31 | in this case hotel or restaurant |
---|
0:03:33 | and then there are |
---|
0:03:35 | independent |
---|
0:03:36 | parts of the dialogue pipeline which then handle the request for each domain |
---|
0:03:42 | and this model has separate dialogue states |
---|
0:03:45 | such that it does not allow |
---|
0:03:47 | communication between those two domains |
---|
0:03:49 | or at least doesn't allow an easy |
---|
0:03:51 | way of doing this type of communication |
---|
0:03:53 | so for example if the user would say great i'd also want a restaurant |
---|
0:03:59 | in the same area which is a relation relating to information about the hotel this |
---|
0:04:05 | would not be something this type of |
---|
0:04:08 | model allows |
---|
0:04:11 | so instead of |
---|
0:04:13 | modelling domains |
---|
0:04:16 | we are looking at entities |
---|
0:04:18 | so |
---|
0:04:19 | in the same |
---|
0:04:21 | example dialogue |
---|
0:04:23 | in the beginning |
---|
0:04:25 | the dialogue was about |
---|
0:04:27 | an object of type hotel |
---|
0:04:30 | and then later about an object of type restaurant |
---|
0:04:34 | and |
---|
0:04:35 | we call the entities in our dialogue model conversational entities |
---|
0:04:38 | and |
---|
0:04:39 | the dialogue also contains an additional entity the relation which is modelled as a separate |
---|
0:04:43 | entity connecting |
---|
0:04:45 | both |
---|
0:04:46 | the hotel and the restaurant |
---|
0:04:51 | and this is |
---|
0:04:52 | the main idea of the new dialogue model |
---|
0:04:55 | so all these entities constitute the conversational world |
---|
0:05:06 | so i will |
---|
0:05:08 | first give you more details about the |
---|
0:05:13 | dialogue model |
---|
0:05:15 | and then i will present our prototype implementation and showcase where we focus on |
---|
0:05:21 | relation modelling |
---|
0:05:22 | and you will see that |
---|
0:05:23 | this dialogue model |
---|
0:05:25 | is actually very effective in handling relations |
---|
0:05:28 | between objects |
---|
0:05:31 | okay but first let's talk about the conversational entity dialogue model in more detail |
---|
0:05:36 | what are objects objects are representations of entities in the real world |
---|
0:05:40 | here we have an object of type hotel and an object of type restaurant |
---|
0:05:43 | and all the objects have a matching |
---|
0:05:46 | entry in a database |
---|
0:05:48 | and based on these entries |
---|
0:05:50 | the actual objects in our conversational world |
---|
0:05:54 | are created |
---|
0:05:55 | and |
---|
0:05:56 | of course to do that the type of the object |
---|
0:05:59 | also has to match |
---|
0:06:01 | the database specification |
---|
0:06:06 | and a relation then connects two objects |
---|
0:06:10 | in our example the hotel and the restaurant |
---|
0:06:13 | and the definition of this relation can also be derived from |
---|
0:06:17 | the type definitions of those objects |
---|
0:06:19 | so in this example |
---|
0:06:22 | we just compare compatible attributes like area area is |
---|
0:06:26 | present in both |
---|
0:06:28 | value lists of the |
---|
0:06:30 | object types |
---|
0:06:32 | so this is |
---|
0:06:33 | this can be something they can connect on and this is part of the relation |
---|
0:06:36 | definition |
---|
0:06:38 | of course you can elaborate this if you |
---|
0:06:40 | wish |
---|
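deriving the relation definition from the two type definitions, as described, can be sketched as an intersection of compatible attributes; the slot and value names below are illustrative, not the real ontology:

```python
# sketch: derive a relation definition from two object type definitions by
# intersecting attributes that appear in both with the same value list.

HOTEL_TYPE = {"area": ["north", "centre", "south"],
              "pricerange": ["cheap", "moderate", "expensive"],
              "stars": ["0", "3", "4"]}
RESTAURANT_TYPE = {"area": ["north", "centre", "south"],
                   "pricerange": ["cheap", "moderate", "expensive"],
                   "food": ["chinese", "italian"]}

def relation_slots(type_a: dict, type_b: dict) -> list:
    """attributes present in both type definitions with compatible values."""
    return sorted(slot for slot in type_a
                  if slot in type_b and type_a[slot] == type_b[slot])

print(relation_slots(HOTEL_TYPE, RESTAURANT_TYPE))  # -> ['area', 'pricerange']
```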
0:06:44 | and |
---|
0:06:45 | each conversational entity which can be an object or a relation maintains a user goal belief state |
---|
0:06:51 | and a context state |
---|
0:06:53 | so objects and relations all have their |
---|
0:06:57 | own state definition |
---|
0:06:59 | and the user goal belief represents information |
---|
0:07:02 | shared by the user |
---|
0:07:03 | as there is uncertainty associated with it |
---|
0:07:08 | it is modelled as a probability distribution over all possible user goal states |
---|
0:07:12 | this is |
---|
0:07:14 | similar to what we already know |
---|
0:07:16 | from pomdp-based dialogue modelling |
---|
0:07:18 | in general |
---|
0:07:20 | and the context state represents information which is offered by the system |
---|
0:07:26 | so for example the system says |
---|
0:07:28 | well |
---|
0:07:30 | the golden house is a chinese restaurant in the centre |
---|
0:07:32 | and this would be something which would be in the context state so the system |
---|
0:07:36 | knows that it offered this information there is no uncertainty associated with that |
---|
0:07:40 | and the context state is |
---|
0:07:42 | very important because |
---|
0:07:44 | users often refer to the information which has just been shown by the system |
---|
0:07:48 | rather than to the information |
---|
0:07:50 | of a concrete object that has been found yet |
---|
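the two states an entity maintains could look like this in code; a minimal sketch in which the class and method names (`ConversationalEntity`, `observe_user`, `system_offered`) are our own choices, not from the talk:

```python
# minimal sketch of the two states each conversational entity maintains:
# the user goal belief (a distribution, since user input is uncertain) and
# the context state (values the system itself offered, no uncertainty).

class ConversationalEntity:
    def __init__(self, entity_type: str):
        self.type = entity_type
        self.user_goal_belief = {}  # slot -> {value: probability}
        self.context = {}           # slot -> value offered by the system

    def observe_user(self, slot: str, distribution: dict):
        """update the belief for one slot after a (noisy) user input."""
        self.user_goal_belief[slot] = distribution

    def system_offered(self, slot: str, value: str):
        """record information the system presented; known with certainty."""
        self.context[slot] = value

hotel = ConversationalEntity("hotel")
hotel.observe_user("pricerange", {"expensive": 0.9, "moderate": 0.1})
hotel.system_offered("name", "hotel cambridge")
print(hotel.context["name"])  # -> hotel cambridge
```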
0:07:54 | and this setup of the conversational world and its entities |
---|
0:07:58 | allows us to do belief tracking on two levels |
---|
0:08:02 | on the world level where you can also have |
---|
0:08:05 | system behaviour for creating entities and for entity disambiguation |
---|
0:08:10 | and then on the entity level we actually |
---|
0:08:14 | track what the specific object or relation the user talks about |
---|
0:08:18 | and what properties this object or this relation actually has |
---|
0:08:26 | okay so |
---|
0:08:27 | as not to confuse you any more i have a small example that we can go |
---|
0:08:31 | through step by step |
---|
0:08:34 | it's the same dialogue you've seen before so it shouldn't be too difficult in the beginning |
---|
0:08:39 | the user says okay i'm looking for a hotel in cambridge where the system can infer okay |
---|
0:08:43 | there must be an object |
---|
0:08:45 | of type hotel |
---|
0:08:48 | and we don't have any information about it yet |
---|
0:08:50 | then the dialogue continues and more information is shared |
---|
0:08:54 | in this case the price range expensive so we can update the user goal |
---|
0:08:58 | belief state and the system then |
---|
0:09:01 | makes an offer to the user |
---|
0:09:02 | the hotel cambridge is an expensive one |
---|
0:09:06 | so this is when the context state is updated |
---|
0:09:10 | then the user says great i also need a cheap restaurant in the same area |
---|
0:09:14 | which allows the dialogue system to infer that there's another object of type restaurant |
---|
0:09:20 | and that there's a relation between both objects |
---|
0:09:23 | so this is when this picture comes to life if you will |
---|
0:09:29 | the dialogue continues more information is acquired about the |
---|
0:09:32 | restaurant until the |
---|
0:09:35 | system proposes a concrete restaurant to the user |
---|
0:09:38 | which is when the context state of the object and of the relation is updated |
---|
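the steps of the example above can be replayed with plain dictionaries; purely illustrative, since the real model keeps belief distributions rather than single values:

```python
# the worked example replayed with plain dicts: entities appear as the
# dialogue reveals them, and user-goal vs context updates happen at
# different turns. all keys and values are illustrative.

world = {}

# "i'm looking for a hotel in cambridge" -> a new object with empty states
world["hotel"] = {"goal": {}, "context": {}}

# "... expensive ..." -> the hotel's user goal belief is updated
world["hotel"]["goal"]["pricerange"] = {"expensive": 1.0}

# the system proposes a venue -> the hotel's context state is updated
world["hotel"]["context"]["name"] = "hotel cambridge"

# "i also need a cheap restaurant in the same area" -> a second object plus
# a relation entity connecting both are added to the conversational world
world["restaurant"] = {"goal": {"pricerange": {"cheap": 1.0}}, "context": {}}
world["relation"] = {"goal": {"area": {"equal": 1.0}}, "context": {},
                     "connects": ("hotel", "restaurant")}

print(sorted(world))  # -> ['hotel', 'relation', 'restaurant']
```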
0:09:47 | so |
---|
0:09:50 | if we look at the properties of this of this |
---|
0:09:53 | new dialogue model |
---|
0:09:56 | and compare it to the multi-domain model we can see several aspects |
---|
0:09:59 | it is more flexible |
---|
0:10:03 | so for example objects can be addressed by the user and the system |
---|
0:10:07 | which is something |
---|
0:10:08 | also possible in the old model in a way but this new model also allows |
---|
0:10:12 | us to have multiple objects at the same time so you could model a dialogue |
---|
0:10:16 | where you have |
---|
0:10:17 | two hotels |
---|
0:10:20 | or |
---|
0:10:21 | three restaurants or whatever you can think of |
---|
0:10:23 | and we would be able to maintain the dialogue state |
---|
0:10:27 | correctly and you could also have relations between those objects |
---|
0:10:31 | it also allows a type hierarchy so |
---|
0:10:33 | to say okay the restaurant is a venue and these things and this might help |
---|
0:10:37 | if you want to do policy modelling for example |
---|
0:10:41 | and |
---|
0:10:42 | the central aspect of the dialogue model of course is that it models |
---|
0:10:46 | relations |
---|
0:10:47 | they can be addressed by the user and the system |
---|
0:10:50 | because as i already |
---|
0:10:53 | stated the relations have their own state |
---|
0:10:56 | this allows the system to actually talk about those relations and address them in a |
---|
0:11:01 | system response |
---|
0:11:03 | and also the relations remain specified |
---|
0:11:06 | even if the contexts of the objects |
---|
0:11:08 | change or if there's |
---|
0:11:09 | no object context specified at all |
---|
0:11:11 | so if you think of a dialogue where the user says at the beginning |
---|
0:11:15 | i'm looking for a hotel and a restaurant in the same area |
---|
0:11:19 | there is no concrete value for the area and the relation |
---|
0:11:22 | couldn't be derived |
---|
0:11:26 | in any other way than |
---|
0:11:28 | having a separate entity modelling the relation |
---|
0:11:37 | so |
---|
0:11:38 | now i'm going to tell you about |
---|
0:11:41 | how the model |
---|
0:11:43 | works |
---|
0:11:47 | with a concrete example of how we have |
---|
0:11:51 | implemented this model |
---|
0:11:53 | and |
---|
0:11:56 | our experiments on |
---|
0:11:58 | relation modelling show that this is a very powerful tool to actually |
---|
0:12:02 | do this |
---|
0:12:07 | so one idea of what we wanted to do when we implemented the prototype was |
---|
0:12:11 | to focus on relation modelling and to reuse |
---|
0:12:14 | the models and conditions we already have for belief and policy modelling |
---|
0:12:18 | that we have from the domain based models |
---|
0:12:22 | so we have a fixed conversational world |
---|
0:12:25 | where we only have one object of each type |
---|
0:12:28 | and there's only one conversational entity in the focus of attention |
---|
0:12:32 | which is which helps the system to figure out how to continue the dialogue |
---|
0:12:38 | and this is similar to the to the |
---|
0:12:40 | domain tracking problem |
---|
0:12:43 | the user goal belief is modelled as a marginal distribution per slot |
---|
0:12:47 | and this |
---|
0:12:49 | also has the effect that you can reuse |
---|
0:12:53 | existing models and conditions for policy modelling for example but it also |
---|
0:12:57 | means that the only possible relation you can |
---|
0:12:59 | actually model is equality the same value |
---|
0:13:02 | in both states |
---|
0:13:08 | there are two open questions |
---|
0:13:11 | for dialogue management |
---|
0:13:12 | how can we incorporate the conversational objects and relations into the decision making process |
---|
0:13:17 | and how can we derive an input state |
---|
0:13:19 | to the dialogue policies from the conversational entities |
---|
0:13:24 | and |
---|
0:13:26 | we implemented a feudal reinforcement learning approach for that |
---|
0:13:30 | because this is very |
---|
0:13:32 | useful as it |
---|
0:13:34 | has a hierarchical structure and it kind of |
---|
0:13:38 | subdivides the problem into several steps |
---|
0:13:42 | in the beginning based on the focus of attention an object is basically |
---|
0:13:46 | selected |
---|
0:13:47 | that the system will respond about and then there's a master policy where a |
---|
0:13:52 | decision is made whether to talk about the object itself or to talk about a |
---|
0:13:56 | relation of that object |
---|
0:13:59 | only after that the system |
---|
0:14:01 | on the second level is able |
---|
0:14:03 | to generate the response referring to the object |
---|
0:14:05 | or to the relation |
---|
0:14:08 | so this means that each conversational entity has its own policy |
---|
0:14:13 | and this also allows us to share relation policies |
---|
0:14:16 | because the relation policy is the same |
---|
0:14:19 | from both sides |
---|
0:14:20 | so whether i talk about the relation coming from the hotel or start talking about |
---|
0:14:23 | the restaurant |
---|
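the three-step hierarchical decision can be sketched as follows; the hard-coded rules stand in for the learned feudal policies, and all entity names and actions are invented for the sketch:

```python
# toy sketch of the hierarchical decision: 1) pick the entity in focus,
# 2) a master policy decides object vs relation, 3) a sub-policy picks
# the concrete action. hard-coded stand-ins for learned policies.

def select_focus(entities, focus_scores):
    """step 1: which object should the system respond about?"""
    return max(entities, key=lambda e: focus_scores[e])

def master_policy(entity, has_open_relation: bool) -> str:
    """step 2: talk about the object itself or about one of its relations?"""
    return "relation" if has_open_relation else "object"

def sub_policy(target: str) -> str:
    """step 3: produce the actual action for the chosen target."""
    actions = {"object": "request(pricerange)",
               "relation": "confirm(area=equal)"}
    return actions[target]

focus = select_focus(["hotel", "restaurant"], {"hotel": 0.2, "restaurant": 0.7})
target = master_policy(focus, has_open_relation=True)
print(focus, sub_policy(target))  # -> restaurant confirm(area=equal)
```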
0:14:26 | and the information we use |
---|
0:14:27 | as |
---|
0:14:29 | input |
---|
0:14:30 | for the policy to do this |
---|
0:14:32 | is on the entity level |
---|
0:14:35 | for the relation we just look at the state of the relation but for |
---|
0:14:39 | the object we have to find a way to combine those things |
---|
0:14:44 | which is necessary for database access basically |
---|
0:14:48 | and lastly we also need a combination of the object and the |
---|
0:14:51 | relation |
---|
0:14:52 | if you want to do conflict resolution |
---|
0:14:54 | because |
---|
0:14:57 | the user goal belief state of the object |
---|
0:14:59 | could state something which |
---|
0:15:01 | contradicts what the relation says based on what the other object has |
---|
0:15:05 | specified |
---|
0:15:06 | and to figure out those conflicts and to allow the system to handle this situation |
---|
0:15:12 | you need to combine them |
---|
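the conflict-resolution motivation can be made concrete with a small check between two objects under an equality relation; a sketch only, not the system's actual combination of states, with illustrative slot values:

```python
# small sketch of the conflict check: under an equality relation, the two
# objects' own specifications can contradict each other.

def conflicts(obj_a: dict, obj_b: dict, equal_slots: list) -> list:
    """slots related by equality where both objects specify different values."""
    return [s for s in equal_slots
            if s in obj_a and s in obj_b and obj_a[s] != obj_b[s]]

hotel = {"area": "centre", "pricerange": "expensive"}
restaurant = {"area": "north"}  # but the relation says: same area

print(conflicts(hotel, restaurant, ["area", "pricerange"]))  # -> ['area']
```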
0:15:18 | we conducted experiments |
---|
0:15:20 | on finding a hotel and a restaurant in cambridge |
---|
0:15:24 | for which we used the pydial system |
---|
0:15:28 | and the conversational world in these experiments |
---|
0:15:31 | consisted of two objects of the types hotel and restaurant |
---|
0:15:35 | and |
---|
0:15:37 | the user goals contained a relation |
---|
0:15:39 | always |
---|
0:15:40 | between the slots area or price range so |
---|
0:15:44 | every user goal had this relation but it was not always used |
---|
0:15:48 | and there was an additional parameter r which specified the probability that the |
---|
0:15:53 | simulated user would actually use that relation |
---|
0:15:56 | so when talking about the area the simulated user may either talk about |
---|
0:16:02 | the value hotel in the centre or talk about the relation |
---|
0:16:06 | in the same area as the restaurant |
---|
0:16:09 | and if r is one the simulated user would only say |
---|
0:16:14 | in the same area as the restaurant |
---|
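the simulated-user behaviour controlled by r might be sketched like this; the utterance strings are illustrative, not the actual simulator templates:

```python
# sketch of the simulated-user choice controlled by r: with probability r
# the area is expressed through the relation, otherwise as a concrete value.

import random

def user_area_utterance(r: float, rng: random.Random) -> str:
    if rng.random() < r:
        return "in the same area as the restaurant"
    return "in the centre"

rng = random.Random(0)
print(user_area_utterance(1.0, rng))  # -> in the same area as the restaurant
print(user_area_utterance(0.0, rng))  # -> in the centre
```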
0:16:17 | and we ran two experiments |
---|
0:16:19 | comparing the |
---|
0:16:21 | conversational entity dialogue model with the multi-domain dialogue model |
---|
0:16:25 | in the first we were interested in the effect of r |
---|
0:16:29 | does it have an effect whether you actually model relations or can the system |
---|
0:16:34 | cope with this problem in a different way |
---|
0:16:36 | for this the focus of attention was fixed only in the second experiment it was learned |
---|
0:16:41 | and then we also wanted to see what the actual effect |
---|
0:16:45 | of this type of model is on the learning behaviour in general |
---|
0:16:52 | so here you see |
---|
0:16:53 | the results of experiment one for two different semantic error rates zero percent and |
---|
0:16:59 | fifteen percent |
---|
0:17:00 | you see the success rate |
---|
0:17:02 | over the relation probability r |
---|
0:17:06 | and |
---|
0:17:07 | for zero percent you can clearly see that with a high r |
---|
0:17:12 | the conversational entity dialogue model performs significantly better |
---|
0:17:17 | than the multi-domain dialogue model |
---|
0:17:20 | and this is also true for the fifteen percent |
---|
0:17:23 | case the gap is much lower here because of how the |
---|
0:17:26 | errors were modelled |
---|
0:17:28 | it is artificial because simulated but based on that it covers some of the |
---|
0:17:32 | problems you would usually have |
---|
0:17:35 | in this situation |
---|
0:17:36 | what's interesting here is not only that the conversational entity dialogue model |
---|
0:17:41 | performs better |
---|
0:17:42 | for high r but also that for low r it does not perform worse |
---|
0:17:47 | so the multi-domain dialogue model is able to cope with |
---|
0:17:50 | users talking about relations to a certain extent |
---|
0:17:54 | basically by repeating the question |
---|
0:17:57 | so you're looking for a hotel what price range are you looking for |
---|
0:18:00 | and the simulated user then answers with a concrete value |
---|
0:18:03 | if |
---|
0:18:04 | the system restates the question |
---|
0:18:06 | so the baseline model can learn to resolve this |
---|
0:18:09 | to a certain extent in terms of success rate or reward |
---|
0:18:12 | which |
---|
0:18:14 | is what we measure |
---|
0:18:17 | but i would assume that if you look at |
---|
0:18:19 | user satisfaction things might be different |
---|
0:18:23 | and |
---|
0:18:24 | we also looked at how many of the system actions actually address |
---|
0:18:29 | a relation |
---|
0:18:30 | and in how many dialogues the system actually addresses |
---|
0:18:35 | a relation and when we looked at the system utterances we found that |
---|
0:18:41 | in twenty four point five percent of the dialogues the system directly addressed a relation which i |
---|
0:18:45 | think is very interesting |
---|
0:18:47 | and for experiment two we have a similar situation |
---|
0:18:52 | it's not as strong because there were interaction effects with the domains |
---|
0:18:57 | depending on whether the restaurant or the hotel was what the |
---|
0:19:00 | first part of the dialogue was about and whether |
---|
0:19:05 | there was a relation present |
---|
0:19:08 | so |
---|
0:19:10 | i think we can say that the new model breaks up the limitations of current |
---|
0:19:14 | dialogue models |
---|
0:19:15 | and |
---|
0:19:16 | i think it is the next logical step towards more complex dialogues |
---|
0:19:21 | as it allows more complex dialogue structures like relations between objects |
---|
0:19:25 | and we have shown a prototype implementation which we evaluated in simulation showing that |
---|
0:19:30 | it outperforms the baseline |
---|
0:19:31 | and for future work |
---|
0:19:33 | we further need to increase |
---|
0:19:35 | the complexity of dialogue structures |
---|
0:19:37 | so more complex relations |
---|
0:19:40 | and |
---|
0:19:41 | more complex conversational worlds just to name only two |
---|
0:19:45 | so you can find the implementation on the website |
---|
0:19:49 | and we also have a new release of pydial so if you |
---|
0:19:52 | are interested have a look thank you very much |
---|
0:20:01 | okay |
---|
0:20:03 | now we have time for questions |
---|
0:20:37 | so you mean if there are like |
---|
0:20:41 | multiple objects you talk about multiple objects |
---|
0:20:44 | and you say for the third object or whatever this should be in the same |
---|
0:20:48 | area |
---|
0:20:50 | and then whether there would be difficulties with resolving that |
---|
0:21:09 | okay |
---|
0:21:10 | and i mean this depends on how the ontology is modelled |
---|
0:21:13 | i mean in these example dialogues we |
---|
0:21:15 | have a simple way of modelling the area which is not so precise |
---|
0:21:18 | just centre |
---|
0:21:20 | i think |
---|
0:21:21 | for doing these things you would have to |
---|
0:21:23 | have a different ontology with a different way |
---|
0:21:27 | of locating these things |
---|
0:21:29 | and then if you model that you could allow this situation |
---|
0:21:45 | ah so you mean whether the relation refers to what the |
---|
0:21:49 | user requested or to the actual values |
---|
0:21:52 | okay |
---|
0:21:55 | i mean the way it's set up currently |
---|
0:21:58 | and let me try to go back to that slide |
---|
0:22:03 | i said the relation references the context states so what the system proposed |
---|
0:22:07 | so whatever is specified by the user |
---|
0:22:10 | is not necessarily taken into account when you look at the relation |
---|
0:22:12 | the relation looks at what the system |
---|
0:22:15 | offered for an entity whatever the values are |
---|
0:22:17 | this basically means if you haven't talked about the area at all |
---|
0:22:21 | and you said i want something in the same area then you would be able to |
---|
0:22:24 | refer to what the |
---|
0:22:26 | system offered to the user as a result this is basically what you're saying |
---|
0:22:30 | you want something which relates to what the system offered to the user |
---|
0:22:34 | in general terms |
---|
0:22:38 | and i think this is why it's important to have these as two separate |
---|
0:22:42 | parts of the state |
---|
0:22:45 | okay so we have a question i think you were the first |
---|
0:23:22 | i mean this |
---|
0:23:23 | so the idea of this work is to extend the model with information beyond simple slot values |
---|
0:23:28 | if there's a necessity for that |
---|
0:23:30 | then |
---|
0:23:32 | you have to add things to the model and the question is whether the model |
---|
0:23:35 | is actually able to model these things and allows the system to address these |
---|
0:23:40 | types of things |
---|
0:23:42 | and this was the goal of this work i mean |
---|
0:23:44 | it cannot solve everything because you would then just have to have some |
---|
0:23:46 | some |
---|
0:23:48 | reasoning going on maybe or whatever to derive information |
---|
0:23:52 | or you could add something to the language |
---|
0:23:56 | to the understanding component or whatever to do this type of thing |
---|
0:24:02 | and if the system would be unsure about what the correct way |
---|
0:24:06 | of handling the situation would be |
---|
0:24:09 | the policy could learn how to handle that and ask the user or |
---|
0:24:13 | whatever |
---|
0:24:14 | okay let's move on to the next question it's you |
---|
0:24:40 | yes |
---|
0:24:49 | yes |
---|
0:24:51 | well from my point of view verbs in an utterance |
---|
0:24:56 | refer to dialogue intentions or dialogue acts so to something you want to do or to relations |
---|
0:25:01 | and i think you could |
---|
0:25:04 | have those in the object type definitions if you were to extend them to have like |
---|
0:25:09 | what specific actions you could actually do |
---|
0:25:12 | and |
---|
0:25:13 | since you're doing a hierarchical way of belief tracking and also policy decisions you could |
---|
0:25:22 | have a more general way to do this on the upper level and then |
---|
0:25:24 | a more fine grained way of doing this on the lower level which would allow |
---|
0:25:27 | you to handle these |
---|
0:25:28 | these things |
---|
0:25:30 | that would be my first idea to deal correctly with these types of utterances |
---|
0:26:08 | it's hard to say i mean i think you are referring to neural approaches to this |
---|
0:26:11 | i think that's what you're basically referring to |
---|
0:26:14 | and i think you can also learn this i think it depends on having a smart |
---|
0:26:18 | smart |
---|
0:26:19 | way of modelling the |
---|
0:26:21 | state so the probability distribution i mean is a very simple way of doing it |
---|
0:26:26 | and if you want to have |
---|
0:26:29 | more complex ones maybe you want to have a hierarchical one then you have to figure out |
---|
0:26:32 | how to do this but |
---|
0:26:35 | it doesn't |
---|
0:26:36 | restrict the general idea of modelling the dialogue in that way |
---|
0:26:41 | so |
---|
0:26:43 | it would still allow you to define relations it would still allow you to talk about those |
---|
0:26:47 | relations and i still think it is a good idea to have those relations because this |
---|
0:26:50 | is |
---|
0:26:51 | from my point of view the only way that the system is also able to |
---|
0:26:54 | talk about those relations |
---|
0:26:56 | but how far this thing goes it's hard to say because i don't know |
---|
0:26:59 | how the new models will look like |
---|
0:27:03 | no |
---|
0:27:08 | i mean one thing i would |
---|
0:27:10 | the next step i would go to would also be to have a probability over |
---|
0:27:14 | different types of relations |
---|
0:27:16 | and also on the entity level which would allow us to have also things like |
---|
0:27:21 | i want |
---|
0:27:22 | this or that |
---|
0:27:23 | or i want |
---|
0:27:25 | not this and how to do these things and i think this is what you |
---|
0:27:28 | could do if you have like a distribution over relations instead of just the values |
---|
0:27:33 | that you have |
---|
0:28:12 | but also what the chase them differently |
---|
0:28:17 | without looking to lead to a given more for even good on something that |
---|
0:28:22 | okay thank you |
---|