The slow-motion format has given a great boost to the camera sector, both in mobile devices and in professional equipment. But what if you could soon convert any video into slow motion, without needing to have activated the mode beforehand, or in case you were not prepared for the shot?
NVIDIA, the world leader in graphics processing, together with the University of Massachusetts Amherst and the University of California, has created an algorithm based on artificial intelligence and machine learning capable of producing high-quality super-slow-motion video from conventional clips recorded at 30fps (frames per second).
The technology works by "artificially" producing the missing, intermediate frames that would have been captured had the clip been recorded at 240fps in super-slow-motion mode. NVIDIA's algorithm relies on a machine learning system trained on more than 11,000 videos of action scenes and everyday activities recorded at 240 frames per second (training ran on a Tesla V100 [one of the company's GPUs, of course] using PyTorch, a neural network framework, which in turn was accelerated by cuDNN). From this data, the neural network predicts what the frames that must be added to slow the video down should look like.
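To illustrate the basic idea of inserting synthetic intermediate frames between captured ones, here is a minimal sketch in Python using NumPy. This is not NVIDIA's method (their system uses a neural network that estimates motion between frames); it uses simple linear blending, the naive baseline, purely to show how 7 generated frames per gap turn 30fps footage into 240fps timing. The function name and dummy frames are illustrative assumptions.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Generate n_intermediate frames between frame_a and frame_b
    by simple linear blending (a crossfade) -- the naive baseline,
    not NVIDIA's learned, motion-aware interpolation."""
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)          # blend weight in (0, 1)
        blended = (1.0 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Slowing 30fps footage to 240fps timing means 7 synthetic frames per gap.
a = np.zeros((2, 2), dtype=np.float32)        # dummy "frame": all black
b = np.full((2, 2), 240.0, dtype=np.float32)  # dummy "frame": bright
mid = interpolate_frames(a, b, 7)
print(len(mid), float(mid[3][0, 0]))          # → 7 120.0 (middle frame is halfway)
```

Linear blending produces ghosting on fast motion, which is exactly why a learned model that accounts for motion between the two input frames yields far better results.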
"Our method can generate multiple intermediate frames that are spatially and temporally coherent," said the researchers behind this development. "Our multi-frame approach far outperforms state-of-the-art single-frame methods," they added.