Motion parallax plane
This is NTHU CVFX course project 5 of team 11. Here we use saliency masks to help create several types of multi-view 3D visual effects, including motion parallax, stop motion, and live photo. With the help of the saliency mask, we can also post-process the images to fix the color of certain pixels and thus enhance the final effect (described in detail below). Saliency is defined as the part of an image that is most noticeable to a human. The use of the saliency mask in this multi-view visual effects project is obvious: we use the mask to decide which features to track or filter, instead of defining a rule (e.g. based on the translation of the features) to classify the detected features into foreground or background. We use a state-of-the-art saliency prediction model, PiCANet (paper, github, Liu et al., CVPR'18), to produce a saliency mask for each image for the later 3D visual effects.
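The post does not include code, but the idea of classifying detected features by the saliency mask can be sketched roughly as follows; the helper name, the 0.5 threshold, and the use of OpenCV's SIFT detector are our assumptions, not the team's implementation.

```python
import cv2
import numpy as np

def split_keypoints_by_saliency(image, saliency, thresh=0.5):
    """Split SIFT keypoints into foreground/background using a saliency map.

    `saliency` is assumed to be a single-channel map in [0, 1], e.g. the
    output of a saliency model such as PiCANet (exact format may differ).
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    mask = saliency >= thresh            # binary foreground mask
    fg, bg = [], []
    for kp, desc in zip(keypoints, descriptors):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        (fg if mask[y, x] else bg).append((kp, desc))
    return fg, bg
```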
Below, we present the results of each effect, briefly describe them, and show the results under different settings.

Motion parallax

In this effect, we want to align the two images so that the main subject, the salient region, moves as little as possible. To do this, we use the saliency mask to filter out all non-foreground SIFT features and align the two images based on the features inside the saliency mask. The first example is a city café; if we simply stack the two images without doing anything, we get: Img1. We use the saliency mask to keep the SIFT features of the city café. Note that the saliency masks are actually grey images; we blend them with the original color image for better visualization: Saliency of Img1. Finally, we align Img1 to Img2 by homography, and also show the result of fixing the pixels in the saliency region: Do nothing. Yet another example of motion parallax: Img1.
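A rough sketch of the alignment step described above, under the assumption that OpenCV is used; the ratio-test threshold, the RANSAC reprojection error, and the `split_keypoints_by_saliency` helper from the previous snippet are ours, not the team's code.

```python
import cv2
import numpy as np

def align_by_homography(img1, img2, fg1, fg2, ratio=0.75):
    """Align img1 to img2 using only foreground (salient) SIFT features.

    `fg1`, `fg2` are lists of (keypoint, descriptor) pairs, e.g. produced by
    split_keypoints_by_saliency() above (an assumed helper).
    """
    kps1, des1 = zip(*fg1)
    kps2, des2 = zip(*fg2)
    des1, des2 = np.float32(des1), np.float32(des2)

    # Lowe's ratio test on k-NN matches between the foreground descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]

    src = np.float32([kps1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kps2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust homography from the salient-region correspondences.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img2.shape[:2]
    return cv2.warpPerspective(img1, H, (w, h))
```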
We show the results produced by the different alignment algorithms. The translation model has 2 degrees of freedom and can only shift the image left/right/up/down. The affine model has dof=5 and can translate, scale, and rotate the image in-plane. The homography model is the strongest and can align features lying on different 3D planes. Please see the Stop motion effect below for a better understanding of the differences between the three alignment models.
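To make the degrees-of-freedom comparison concrete, here is one possible way to fit the three models to the same matched point pairs with OpenCV; the choice of estimators is ours, and note that the post quotes dof=5 for its affine model while `estimateAffinePartial2D` below fits the 4-dof shift/scale/rotation variant, so this is only an approximation.

```python
import cv2
import numpy as np

def fit_alignment_models(src_pts, dst_pts):
    """Fit translation / affine / homography models to matched points.

    src_pts, dst_pts: arrays of shape (N, 2) with corresponding coordinates.
    """
    src = np.asarray(src_pts, np.float32)
    dst = np.asarray(dst_pts, np.float32)

    # Translation: 2 dof, a single x/y shift (robust median of displacements).
    shift = np.median(dst - src, axis=0)

    # Affine-like model: shift, scale and in-plane rotation, fit with RANSAC.
    A, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

    # Homography: 8 dof, models a plane-to-plane transform in 3D space.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    return shift, A, H
```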
Stop motion

The implementation of the stop motion effect is very similar to motion parallax. The only difference is that we apply the same process as in motion parallax to each pair of adjacent frames and align all the images to the middle frame. We first show a case, MOS Burger, to depict the differences between the alignment models: Do nothing. Align by translation: in this failure example, the foreground features rotate in the image plane, which makes it impossible to model the motion by only shifting the image. Align by affine: the affine alignment can model image shift, scale, and in-plane rotation; as a result, it fixes the center of the table and makes the scene look like it rotates around the table center. Align by homography: the homography can model a plane-to-plane transform in 3D space, which keeps the table plane fixed in the result. Next, we show a successful example in which the image plane is aligned with the gravity direction, so the foreground features can easily be modeled by an affine transform.
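A minimal sketch of the adjacent-frame chaining described above; the `estimate_h` helper and the use of homographies for every pair are assumptions (the post also shows translation and affine variants), not the team's actual code.

```python
import cv2
import numpy as np

def align_to_middle_frame(frames, estimate_h):
    """Warp every frame into the coordinate system of the middle frame.

    `estimate_h(a, b)` is an assumed helper returning the 3x3 homography that
    maps frame `a` onto frame `b` (e.g. built from the snippets above).
    """
    mid = len(frames) // 2
    h, w = frames[mid].shape[:2]
    aligned = [None] * len(frames)
    aligned[mid] = frames[mid]

    # Chain pairwise transforms leftwards from the middle frame.
    H = np.eye(3)
    for i in range(mid - 1, -1, -1):
        H = H @ estimate_h(frames[i], frames[i + 1])
        aligned[i] = cv2.warpPerspective(frames[i], H, (w, h))

    # Chain pairwise transforms rightwards from the middle frame.
    H = np.eye(3)
    for i in range(mid + 1, len(frames)):
        H = H @ estimate_h(frames[i], frames[i - 1])
        aligned[i] = cv2.warpPerspective(frames[i], H, (w, h))

    return aligned
```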
Live photo

For the previous two tasks, we use the saliency mask to filter out all non-salient SIFT features, while for the live photo we instead align the images based on the background features (those outside the saliency mask). To make the background region look stable, we create a fixed background (BG) image by taking the median at each pixel along all frames, and then fill the BG pixels (based on the BG mask) of each frame with the corresponding pixels of the median BG image. We create the BG mask in two different ways, one of which uses the union of the saliency maps of all frames.
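A minimal sketch of the median-background idea, assuming uint8 frames of equal size and a boolean BG mask; the function name and mask convention are ours, not the team's code.

```python
import numpy as np

def stabilize_background(frames, bg_mask):
    """Fill background pixels of every frame with a fixed median image.

    frames:  list of HxWx3 uint8 images.
    bg_mask: HxW boolean mask, True where the pixel is treated as background
             (derived from the saliency maps as described above).
    """
    stack = np.stack(frames, axis=0)                  # (T, H, W, 3)
    median_bg = np.median(stack, axis=0).astype(np.uint8)

    out = []
    for frame in frames:
        fixed = frame.copy()
        fixed[bg_mask] = median_bg[bg_mask]           # overwrite BG region
        out.append(fixed)
    return out
```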