0:00:18 | thank you mister chairman for the introduction |
---|
0:00:21 | my name is [inaudible] and i'm a PhD student at Mid Sweden University |
---|
0:00:27 | and this is a beautiful campus it's a pleasure to be here |
---|
0:00:32 | well i want to thank you for staying for the last of the presentations and i'm |
---|
0:00:37 | going to talk about the sampling pattern cube i'm going to introduce this model in |
---|
0:00:42 | a brief and clear way and i hope to get useful questions from you and |
---|
0:00:47 | this has been joint work together with my colleagues at the Realistic 3D group |
---|
0:00:54 | well |
---|
0:01:01 | first i would like to thank our funders who supported us in all this work |
---|
0:01:05 | and these will be the contents i'm going to talk about first i will give |
---|
0:01:09 | a little bit of the background and the motivation of the work and then i |
---|
0:01:14 | will introduce the model |
---|
0:01:17 | the SPC model reviewing the scope and the definitions and how we generate this SPC |
---|
0:01:23 | applied to some known cases and we will see if it is good enough for |
---|
0:01:29 | us to extract features of interest and then i will conclude the work together with |
---|
0:01:35 | some hints about the future work |
---|
0:01:38 | i'd like to mention that our group at Mid Sweden University is named Realistic |
---|
0:01:43 | 3D and we are more or less interested in 3D information movies |
---|
0:01:50 | images or video and we cover more or less the whole chain of 3D |
---|
0:01:55 | starting from the capturing processing transmission and the post-processing and adaptation to the display |
---|
0:02:03 | and the viewing experience so we cover the whole chain but this talk will be about |
---|
0:02:08 | the capturing and when we talk about capture we mostly mean cameras and parameters |
---|
0:02:14 | related to them so we will talk about cameras in this presentation and when |
---|
0:02:21 | we say camera there can be many configurations you're familiar with some |
---|
0:02:26 | of them for sure the SLR cameras maybe which are very available and we |
---|
0:02:32 | know about them but for specific applications such as light field capturing or other kinds of |
---|
0:02:38 | capturing we usually go to unconventional camera setups and it's important to |
---|
0:02:44 | be able to model them at the level of the light and to be |
---|
0:02:47 | able to extract parameters with low complexity and at the same time a good level of description |
---|
0:02:54 | and for example here this setup is used |
---|
0:03:01 | and well they use this setup which is a camera here and a lens |
---|
0:03:06 | array here to capture light field photography and this is a very famous camera array |
---|
0:03:13 | setup maybe you have seen it before and this is a different camera setup and |
---|
0:03:20 | you are probably familiar with the famous Lytro camera |
---|
0:03:24 | which is a plenoptic camera so there are different configurations and things |
---|
0:03:29 | about them and we would like to have a model to be able to extract |
---|
0:03:35 | parameters |
---|
0:03:36 | like what we have here i want to say that there are different parameters |
---|
0:03:44 | related to one camera and i haven't seen any kind of map or kind |
---|
0:03:50 | of measure to be able to put them on a scale and to be able |
---|
0:03:55 | to compare them and for example say camera one is better than camera two in that |
---|
0:04:00 | sense or if you do this to the camera then these parameters are changed |
---|
0:04:03 | in this way that is behavioural and at the same time descriptive information |
---|
0:04:09 | about the camera system what the camera system can |
---|
0:04:14 | do and how so these are different setups for the camera and these are usually |
---|
0:04:19 | parameters of interest for different applications and the ones i have pointed out here are more |
---|
0:04:25 | or less related to the focal properties of the camera and i will come to |
---|
0:04:28 | this point later for example for an application where angular resolution in a certain |
---|
0:04:35 | plane is more important for us maybe we can extract parameters using the model and |
---|
0:04:40 | see that camera two which is shown by the right column here is better for |
---|
0:04:46 | this application at this distance from the object or |
---|
0:04:51 | be able anyhow to compare camera one and two and come to a conclusion which |
---|
0:04:55 | one to choose or what modification to apply to the camera to get a |
---|
0:04:59 | better result so maybe the remarks about the work we are doing are to keep |
---|
0:05:04 | the complexity of the model low and at the same time to give it a high |
---|
0:05:07 | descriptive level which can be used for extracting features or modeling the system |
---|
0:05:14 | there are models that are widely used and i've shown two typical instances here |
---|
0:05:20 | for example there may be more models but these are two typical examples one is |
---|
0:05:24 | the ray-based model which considers light as light rays you may be |
---|
0:05:30 | familiar with the two-plane representation having one point in each plane and |
---|
0:05:36 | the line connecting these two points is considered as the ray and we call this |
---|
0:05:40 | the ray description and the method is usually the paraxial approximation in which we consider that |
---|
0:05:46 | the angle of the light ray to the optical axis is small enough to apply some |
---|
0:05:51 | approximations and it is widely used in different applications such as ray tracing photography microscopy |
---|
0:05:59 | or telescopes and we are all familiar with this model a more comprehensive |
---|
0:06:05 | more complex model is the wave optics model which is treating the light as an |
---|
0:06:11 | electromagnetic wave and the methods which are working with these electromagnetic waves are |
---|
0:06:17 | usually starting from maxwell's equations harmonic waves and fourier theory and it is able |
---|
0:06:23 | to explain more properties well at the expense of more complexity and well |
---|
0:06:31 | we are going to |
---|
0:06:34 | somehow aim at something between these two models and the scope of the work we are |
---|
0:06:39 | doing is well |
---|
0:06:42 | mainly that it will be a geometry-based model and it will |
---|
0:06:47 | exclude anyhow the wave optics at least at this stage and it applies to |
---|
0:06:52 | optical capturing systems which can be as i said conventional cameras or new setups |
---|
0:07:00 | so the motivation of the work is to have a framework for modeling complex capturing |
---|
0:07:05 | systems and we expect that this model provides a kind of tool to be |
---|
0:07:10 | able to extract properties from the system and at the same time keeping |
---|
0:07:15 | in mind low complexity and a high descriptive level of the model |
---|
0:07:20 | so basically the model can be applied to different camera setups and we generate |
---|
0:07:26 | the SPC using the tools of mathematics and geometry we have and well i was trying |
---|
0:07:33 | to show something like the SPC model which is the sampling pattern cube so |
---|
0:07:38 | picture a cube and put the light samples inside which are in the form of |
---|
0:07:41 | light containers which will be introduced soon and from this model we are extracting features |
---|
0:07:48 | and |
---|
0:07:49 | well this model is helpful for visualising purposes and also for describing the sampling behavior |
---|
0:07:54 | of the system |
---|
0:07:57 | there can be wide applications for these models first of all study and design of |
---|
0:08:02 | the capturing system can be one application and investigating system variations if we have |
---|
0:08:07 | a system and we alter some parts or we vary the distances or properties |
---|
0:08:11 | of the system how is that reflected in the sampling behavior of the system and |
---|
0:08:17 | the one i pointed out on the second or third slide is investigating |
---|
0:08:22 | inter-camera comparisons which one is better in what sense for this application or if you |
---|
0:08:27 | want to compare two different camera setups and one possible application can be adaptation |
---|
0:08:33 | of the post-processing algorithms about which i will say some more |
---|
0:08:40 | well the sampling pattern cube i'm talking about there is a |
---|
0:08:44 | very simple idea behind it and this is originating from the light samples in this model |
---|
0:08:51 | light samples in this model are in the form of light containers |
---|
0:08:55 | and we can put it alongside ray tracing and the ray model and what is |
---|
0:09:01 | special about light containers is that they are focused light they are formed of focused light |
---|
0:09:07 | so there is a point which all or a bundle of the light rays |
---|
0:09:12 | are passing through and we call this point the tip position of the |
---|
0:09:16 | light container and there is an angular span associated with the light container and in |
---|
0:09:21 | this representation we have four angles associated with it but this is the representation at |
---|
0:09:29 | this stage so a light container as we will see in the |
---|
0:09:36 | slides coming next has the tip position and angular span as information and the |
---|
0:09:43 | focal properties of the system are somehow coded in these samples and |
---|
0:09:51 | well the light |
---|
0:09:53 | containers will then produce the sampling pattern cube which i show here and |
---|
0:10:00 | these small light containers are distributed inside the cube and so we can |
---|
0:10:07 | say that the sampling pattern cube is a set of these light containers and i |
---|
0:10:11 | will show how to generate it and how to use it so basically we |
---|
0:10:16 | have a camera and there is the space in front of the |
---|
0:10:20 | camera and we try to provide information about how this space is sampled by this |
---|
0:10:27 | camera using the light container elements |
---|
0:10:33 | and for ease of illustration we |
---|
0:10:37 | have some simplifications here in the presentation in the slides coming i |
---|
0:10:43 | will consider only one row of the image sensor as a starting |
---|
0:10:50 | point and i will not go to the two-dimensional image sensor it would be |
---|
0:10:54 | too complicated to put it on a plane to show so i won't show light |
---|
0:10:58 | containers in the 3D representation but only go for the x-z representation and |
---|
0:11:04 | two angles the starting and finishing and the tip so these are simplifications we |
---|
0:11:09 | do for illustration purposes and |
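The simplified light container just described, a tip position in the x-z plane plus a starting and finishing angle, can be sketched as a small data structure. This is an illustrative sketch with my own field names, not the notation of the paper.

```python
from dataclasses import dataclass

@dataclass
class LightContainer:
    """A focused-light sample in the simplified 2D (x-z) illustration:
    all rays of the container pass through the tip position, and the
    container covers an angular span between two angles measured
    against the optical (z) axis."""
    tip_x: float        # tip position along x
    tip_z: float        # tip position along the optical axis z
    theta_start: float  # starting angle of the span (radians)
    theta_end: float    # finishing angle of the span (radians)

    def angular_span(self) -> float:
        """Width of the angular span."""
        return self.theta_end - self.theta_start
```

In the full 3D representation a container would carry four angles rather than two, as mentioned above.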
---|
0:11:12 | there is one more thing in the 2D space instead of x-z if we |
---|
0:11:19 | have light containers like this depicted with a tip position and a starting and finishing angle |
---|
0:11:25 | we |
---|
0:11:26 | are transforming them to the position-angle representation and this position-angle representation has basically the x-axis and the |
---|
0:11:34 | theta-axis here so that the x is like here the tip position and we have |
---|
0:11:39 | an angular span on that axis and |
---|
0:11:44 | instead of seeing cones like this we will have |
---|
0:11:49 | pieces of lines like this and |
---|
0:11:52 | we should have in mind that these lines are |
---|
0:11:55 | horizontal and being horizontal means the light is in focus it means there is one |
---|
0:12:00 | positional information associated with the whole line which is the tip of the light container |
---|
0:12:06 | so |
---|
0:12:08 | we will face only positions like these horizontal lines in the sampling pattern cube |
---|
0:12:16 | and this one shows the simple idea behind how we generate the sampling pattern cube |
---|
0:12:22 | we basically start from the camera if we consider this part inside the camera and |
---|
0:12:27 | the optical elements in the camera there can be only a main lens or |
---|
0:12:32 | a combination of different lens setups and this is the sensor plane |
---|
0:12:39 | what we are going to do is to form light containers |
---|
0:12:44 | on the sensor plane based on physical properties of the sensor namely the light acceptance angle of |
---|
0:12:49 | the sensor so from the light acceptance angle we define the first set of |
---|
0:12:57 | light containers then we backtrace these light containers into the scene and each time |
---|
0:13:02 | a light container is passing through an optical element that container transforms to a new container |
---|
0:13:09 | for example this one is transformed to this one so a new tip position and |
---|
0:13:13 | angular information is assigned to the light container and finally |
---|
0:13:19 | in an iterative process we project all the initial light containers to the 3D |
---|
0:13:26 | scene in front of the camera and what we get is called the sampling |
---|
0:13:30 | pattern cube which we will work with later well this is a more formal presentation |
---|
0:13:36 | of the same process we have the flowchart where we |
---|
0:13:40 | actually form the light containers and then go through this iterative process |
---|
0:13:47 | to project all the light containers to the scene and finally we come up |
---|
0:13:51 | with the set of light containers in the form of the SPC |
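The iterative generation described here, forming the initial containers on the sensor plane and then transforming them element by element out into the scene, can be sketched as a simple loop. The element interface (a `transform` method returning a new container, or `None` when the container is fully blocked) is my own assumption for illustration, not the paper's API.

```python
def generate_spc(initial_containers, optical_elements):
    """Back-project the initial sensor-plane light containers through each
    optical element in order; the containers that survive form the
    sampling pattern cube (SPC)."""
    containers = list(initial_containers)
    for element in optical_elements:        # e.g. lenslet array, aperture, main lens
        transformed = []
        for lc in containers:
            new_lc = element.transform(lc)  # new tip / span, or None if blocked
            if new_lc is not None:
                transformed.append(new_lc)
        containers = transformed
    return containers  # the final set of light containers = the SPC
```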
---|
0:13:57 | well i will not go into very much detail but just to give you some |
---|
0:14:01 | idea we have optical elements like lenses or like apertures and so on you |
---|
0:14:09 | can refer to the paper for more information but anyway for example here if |
---|
0:14:13 | a light container comes to an aperture well for an aperture we know where the aperture is |
---|
0:14:19 | so we know which plane the aperture is located on and we know the |
---|
0:14:23 | size |
---|
0:14:25 | or |
---|
0:14:26 | the opening area of the aperture and the light container coming to this aperture well |
---|
0:14:31 | part of it is cut out because it does not pass the opening of the aperture and part |
---|
0:14:35 | is staying here so we will have a new light container like this cutting the |
---|
0:14:41 | part which is not inside the aperture span and |
---|
0:14:48 | we will come to this point which is the new light container and we will |
---|
0:14:51 | go to the next iteration step and for example for a lens |
---|
0:14:59 | if this is the lens plane and we know the focal properties of the lens |
---|
0:15:02 | and we know the lens equation then a light container coming to this plane will transform |
---|
0:15:08 | to a new one |
---|
0:15:10 | and a new position and angular span is given to the new light container and we |
---|
0:15:16 | go through this process |
---|
0:15:19 | until all light containers are processed |
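The two element transforms described above can be sketched numerically: for the aperture, the bounding rays of a container are intersected with the opening and the span is narrowed; for the thin lens, the paraxial ray transfer and the lens equation give the new ray angles and the new tip. These are standard paraxial-optics formulas; the function names and sign conventions are mine, not the paper's.

```python
import math

def clip_by_aperture(tip_x, tip_z, theta_start, theta_end, ap_z, ap_min, ap_max):
    """Cut the part of a 2D light container that misses the aperture
    opening [ap_min, ap_max] on the plane z = ap_z (assumed ahead of the
    tip, ap_z > tip_z). Returns the narrowed (theta_start, theta_end)
    with the same tip, or None if the container is fully blocked."""
    dz = ap_z - tip_z
    # Footprint of the two bounding rays on the aperture plane.
    x_lo = tip_x + dz * math.tan(theta_start)
    x_hi = tip_x + dz * math.tan(theta_end)
    lo = max(min(x_lo, x_hi), ap_min)
    hi = min(max(x_lo, x_hi), ap_max)
    if lo > hi:
        return None
    return math.atan2(lo - tip_x, dz), math.atan2(hi - tip_x, dz)

def thin_lens_ray(x, theta, f):
    """Paraxial thin-lens ray transfer: ray height unchanged at the lens
    plane, angle bent by -x/f."""
    return x, theta - x / f

def image_distance(s, f):
    """Lens equation 1/s + 1/s' = 1/f: distance s' at which a tip at
    distance s from the lens is re-imaged (inf when s == f)."""
    if s == f:
        return math.inf
    return 1.0 / (1.0 / f - 1.0 / s)
```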
---|
0:15:24 | and now this is a very simple example the schematic of |
---|
0:15:26 | a single lens system |
---|
0:15:28 | if this is the image sensor and this is the single lens system |
---|
0:15:33 | we have projected information from the image plane to the 3D space in front |
---|
0:15:38 | of the camera so here if z equals zero is considered the plane of the main |
---|
0:15:44 | lens |
---|
0:15:45 | and z equals minus d is the plane of the image sensor then these |
---|
0:15:52 | lines showing the in-focus light which is in the form of the light containers as |
---|
0:15:57 | i said before are projected to another plane so you see that the angular |
---|
0:16:03 | span of the light there has a big change as well as the positional |
---|
0:16:07 | information has been changed and now we have a new set of light containers in |
---|
0:16:12 | the form of the SPC that we can extract properties of interest from |
---|
0:16:17 | well we want to show |
---|
0:16:21 | that the light containers are actually reflecting the behaviour of the system or in |
---|
0:16:25 | better words the SPC model in general is reflecting the sampling behavior of |
---|
0:16:30 | the system and to show that we are applying this SPC model to |
---|
0:16:36 | known cases the plenoptic camera |
---|
0:16:39 | in the conventional form and the focused plenoptic camera |
---|
0:16:45 | i hope you're familiar with the system setups i give some details about |
---|
0:16:51 | them but well i think these are well-known systems our system is |
---|
0:16:58 | containing the main lens a lenslet array and an image sensor placed behind the |
---|
0:17:04 | lenslet array and the two systems both have the same optical elements and the only |
---|
0:17:10 | difference between them is the distances between the optical elements here we have |
---|
0:17:17 | the space between the lenslet array and the image sensor the same as the focal |
---|
0:17:22 | length of the lenslets |
---|
0:17:24 | here it's not the same we have spacing b and it is smaller than the focal length |
---|
0:17:30 | and there is a relay system relation between the image plane which is here |
---|
0:17:36 | and the image sensor |
---|
0:17:40 | and the main lens is pushed forward so the spacings are basically different although |
---|
0:17:45 | the optical elements are the same and this slight difference gives them very |
---|
0:17:50 | different properties in terms of sampling and |
---|
0:17:53 | high-level properties of the camera like resolution like depth of field the focal |
---|
0:18:00 | properties |
---|
0:18:02 | well this is just a bit more information about one of the cameras |
---|
0:18:09 | the first setup the conventional form and i would like |
---|
0:18:14 | to highlight that the spatial resolution is equal to the number of lenslets in this |
---|
0:18:18 | setup after we render images the spatial resolution of the images is equal to the |
---|
0:18:24 | number of lenslets and there is a trade-off so if you raise the number |
---|
0:18:28 | of lenslets the spatial resolution is going higher but the angular resolution is going to be lower |
---|
0:18:33 | so this is the main feature associated with plenoptic camera I and i will come back |
---|
0:18:38 | to this point later |
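The trade-off stated here, rendered spatial resolution equal to the lenslet count with the pixels behind each lenslet becoming angular samples, is simple arithmetic; the sketch and its numbers below are illustrative assumptions of mine, not the speaker's figures.

```python
def plenoptic1_resolutions(sensor_pixels_per_row, n_lenslets):
    """Conventional (type I) plenoptic camera: spatial resolution of the
    rendered image equals the number of lenslets, and the pixels behind
    one lenslet give the angular resolution. With a fixed sensor,
    raising one lowers the other."""
    spatial = n_lenslets
    angular = sensor_pixels_per_row // n_lenslets  # pixels behind one lenslet
    return spatial, angular
```

For example, with 4000 pixels per row, 200 lenslets give 200 spatial samples with 20 angular samples each, while 400 lenslets double the spatial resolution but halve the angular one.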
---|
0:18:42 | and the focused plenoptic camera structure |
---|
0:18:46 | which as i said has a relay system between the main lens image and the sensor |
---|
0:18:51 | and it can be considered as an array of |
---|
0:18:56 | cameras inside the camera so the behaviour is more or less similar to a camera |
---|
0:19:01 | array and there are multiple positional samples for each angular sample and the spatial resolution |
---|
0:19:09 | is decoupled from the number of lenslets in this setup this is the |
---|
0:19:14 | main difference between the two camera setups and |
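The relay relation mentioned here, each lenslet re-imaging the main lens's image plane onto the sensor, can be sketched with the lens equation. Treating the distance a in front of the lenslet and b behind it as conjugate distances is a textbook description of the focused plenoptic camera; the function and variable names are my own.

```python
def lenslet_relay(a, b):
    """Focused (type II) plenoptic camera relay: a lenslet images the main
    lens's image plane (distance a in front of it) onto the sensor
    (distance b behind it), so 1/a + 1/b = 1/f. Returns the required
    lenslet focal length f and the lateral magnification b/a."""
    f = 1.0 / (1.0 / a + 1.0 / b)
    m = b / a  # lateral magnification of the relay
    return f, m
```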
---|
0:19:19 | these also are the numbers we have used for our simulations these are typical |
---|
0:19:25 | numbers and there have been practical setups with these numbers and the basic thing |
---|
0:19:30 | i want to highlight here is that the only differences between plenoptic camera |
---|
0:19:34 | I and the focused one are in the spacings |
---|
0:19:39 | between the main lens and the lenslet array on the one hand and from the lenslet array |
---|
0:19:42 | to the sensor on the other and the rest of the parameters are the same so whatever differences result |
---|
0:19:48 | are from the difference in the |
---|
0:19:52 | spacings and these are the typical |
---|
0:19:56 | SPC shapes we expect from plenoptic camera I and the focused plenoptic camera |
---|
0:20:01 | you see that we have a kind of area sampled here and here we |
---|
0:20:05 | have a very narrow area in the form of lines maybe spread in the |
---|
0:20:09 | space |
---|
0:20:11 | and the angular span we see here is very considerable and the angular span of |
---|
0:20:16 | the samples is very small here and |
---|
0:20:20 | here is a closer look at the same information we have this for plenoptic camera I |
---|
0:20:29 | and we can see in there |
---|
0:20:32 | instead |
---|
0:20:33 | this is the area sampled actually the density is too high so we just |
---|
0:20:38 | see the shape here as a color but if we look from the inside |
---|
0:20:42 | at this setup |
---|
0:20:44 | we can see the light samples here in the form of the light |
---|
0:20:49 | containers and we can see multiple angular samples for a single position this is a single |
---|
0:20:55 | position and this is its span so there are multiple angular samples for a single position and this |
---|
0:21:02 | one is showing samples coming from behind one lenslet so the information behind one lenslet |
---|
0:21:10 | captured on the image sensor is in the form of a column |
---|
0:21:14 | in the case of plenoptic camera I |
---|
0:21:18 | and this is what we also see |
---|
0:21:21 | basically the same data and this is the case of the focused plenoptic camera |
---|
0:21:27 | and we see the sampling properties are different these are the multiple position samples for |
---|
0:21:33 | one single angular span and we can see the data sampled by pixels behind one |
---|
0:21:38 | lenslet |
---|
0:21:40 | i hope this gives you the impression that |
---|
0:21:43 | the SPC is following the behaviour of the camera system so the next slide is |
---|
0:21:50 | showing that if we apply variations in the camera system these variations are reflected in |
---|
0:21:55 | the SPC and this variation i've decided to be the variation of the lenslet pitch |
---|
0:21:59 | size in this case we can see how the information we are interested in |
---|
0:22:06 | changes when the pitch size of the lenslets is varied and we |
---|
0:22:13 | can see the trade-off between angular and spatial resolution in the plenoptic camera |
---|
0:22:17 | I case while in the focused plenoptic camera there is no trade-off and |
---|
0:22:22 | it is confirming that this SPC model is following the behaviour of the system |
---|
0:22:28 | i did not talk about the feature extractors which are an |
---|
0:22:32 | ongoing work also we are more or less now focusing on the resolution parameters and |
---|
0:22:38 | um |
---|
0:22:39 | these feature extractors as i said can be in the form of focal plane field-of-view spatial resolution in |
---|
0:22:46 | different depth planes and angular resolution and depth resolution and different focal properties and |
---|
0:22:52 | we hope to publish some results in this part and i want to |
---|
0:22:57 | conclude that the light field sampling behavior is reflected in this model and since |
---|
0:23:04 | the SPC preserves the focal properties of the system it is capable of explaining high-level |
---|
0:23:10 | behavior of the system like the focal properties like |
---|
0:23:15 | depth of field or like the performance of rendering algorithms at different depths and it |
---|
0:23:22 | is capable of extracting the high-level camera parameters of interest and at the same |
---|
0:23:27 | time it keeps it simple but it has a high |
---|
0:23:32 | descriptive level and well there are some future works and they are actually ongoing works |
---|
0:23:38 | related to this part and we are trying to investigate existing camera system setups |
---|
0:23:44 | as one of the major points of the system thank you for your attention |
---|
0:24:20 | well at this stage we consider them as a single optical element but there is |
---|
0:24:25 | no limitation we can do it the other way it depends on what you're expecting from |
---|
0:24:30 | the model if for example you are going for precise results from the |
---|
0:24:37 | model or if you're combining two systems and you want to keep precision as much |
---|
0:24:41 | as possible while you're spending more on modeling the more complex systems this is a |
---|
0:24:47 | trade-off and you will decide about how to work with this model but this is |
---|
0:24:52 | explaining the basic behavior of the system and |
---|
0:25:04 | oh |
---|
0:25:05 | yeah and don't forget that we have a very sparse set of assumptions here we're working |
---|
0:25:10 | only with um |
---|
0:25:12 | geometrical optics and this is maybe even the worst case i mean this is a |
---|
0:25:18 | stronger assumption compared to what you are discussing |
---|
0:25:31 | thank you |
---|