Video editor datamosh
8/2/2023

That sample looks like a creative use of removing an I frame from the stream. Not sure how familiar you are with video encoding, but a stream is typically composed of I, P, and B frames. I frames are intra frames, or whole pictures; P frames are predictive; and B frames are bi-predictive. Only I frames are complete images, with P and B frames being partial image chunks that move according to the motion vectors stored in the encoded file. That is, the encoder calculated which direction the pixels were travelling and saves space by moving small portions of the image around instead of storing a whole new frame.

When we strip out the I frames, the motion vectors still tell the decoder to move areas of the image around. However, without the I frame, it is moving chunks of the previous shot, as in your example. Over time, smaller pieces get updated because they have changed significantly from the original I frame, so some of those blocks are replaced with newer image data, which is why the dog gradually appears.

Replicating this in Blender is no trivial task. If you were to track the entire scene, using many hundreds of points, you could possibly achieve a similar effect by gradually transitioning portions of your image from the first shot to the second.
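The mechanics above can be sketched in a few lines of Python. This is a toy simulation, not a real decoder: frames are plain 2D lists of pixel values, each 8x8 block carries a single hypothetical (dy, dx) motion vector, and the residual data a real codec would add is omitted. It only illustrates why applying a P frame's vectors to the wrong reference frame, because the I frame was stripped, drags chunks of the previous shot around.

```python
def apply_motion_vectors(prev, vectors, block=8):
    """Reconstruct a P-frame by copying block-sized chunks from the
    previously decoded frame, shifted by each block's motion vector.
    Simplified: real codecs also add a residual on top of this copy."""
    h, w = len(prev), len(prev[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = vectors[by // block][bx // block]
            # Clamp the source block so it stays inside the frame.
            sy = min(max(by + dy, 0), h - block)
            sx = min(max(bx + dx, 0), w - block)
            for y in range(block):
                for x in range(block):
                    out[by + y][bx + x] = prev[sy + y][sx + x]
    return out

# Two 16x16 "shots": shot A is a gradient, shot B (the dog) is flat gray.
shot_a = [[y * 16 + x for x in range(16)] for y in range(16)]
shot_b = [[128] * 16 for _ in range(16)]

# Made-up motion vectors for shot B's first P-frame: the top-left
# block pulls its pixels from 8 columns to the right.
vectors = [[(0, 8), (0, 0)], [(0, 0), (0, 0)]]

# Normal decode: the vectors are applied to shot B's own I frame.
normal = apply_motion_vectors(shot_b, vectors)

# Datamosh: the I frame was stripped, so the same vectors are applied
# to the last decoded frame of shot A, smearing the old shot around.
moshed = apply_motion_vectors(shot_a, vectors)
```

In the normal decode the output stays flat gray; in the moshed decode the top-left block is filled with shifted gradient pixels from the previous shot, which is the block-smearing look of a datamosh.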