In the above command, all I needed to do was force a resolution that the encoder supports, plus deinterlacing with "yadif", because h264_qsv failed on interlaced sources.

Here is an example that I use in production in a commandline processor to capture from a Decklink card (note that I have a custom-built ffmpeg.exe with Decklink support copied to c:\windows\system32):

Code: Select all
%comspec% /C "ffmpeg.exe -y -f decklink -i "%s_decklink_card%" -c:v h264_qsv -b:v 4M -r 25 -s 1920x1080 -vf "yadif" -preset veryslow -segment_time %s_segment_duration_ch3% -f segment -strftime 1 -t %s_duration_ch3% "%s_directory_ch3%\chunk_%Y-%m-%d_%H-%M-%S.mp4" "

I noticed that the nvenc encoder in ffmpeg, unlike other encoders such as libx264, is not able to tell ffmpeg to insert automatic filters like colorspace filters, so depending on the source format the encoder throws different errors about wrong colorspace and the like.

I have two FFAStrans setups running that drive Quick Sync workflows using a commandline processor which starts ffmpeg, but those setups are limited to "always the same source format".

The final conclusion is that the FFAStrans team faces the problem that we mostly work in the production sector, where hardware acceleration does not help; it is for delivery/archiving/live.

To be honest, personally I would prefer to work on Quick Sync support first, because not everyone has an Nvidia card built in, but a lot of users have Quick Sync onboard with their Core i CPU. Anyway, the topic was discussed in depth here:
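Since nvenc does not trigger automatic filter insertion, one way to work around the colorspace errors is to normalize the source explicitly in the filter chain before the encoder sees it. The sketch below is my illustration, not from the original post; the input filename is hypothetical, and the bitrate/resolution values simply mirror the capture command above:

```shell
# Hypothetical workaround sketch: deinterlace, force a fixed resolution and
# a pixel format that h264_nvenc accepts (yuv420p), so odd source
# colorspaces/formats no longer make the encoder fail.
ffmpeg -y -i input.mxf \
  -vf "yadif,scale=1920:1080,format=yuv420p" \
  -c:v h264_nvenc -b:v 4M -r 25 output.mp4
```

The "format=yuv420p" step is the key part: it does by hand the pixel-format conversion that ffmpeg would insert automatically for software encoders like libx264.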