Streaming your Linux desktop to YouTube and Twitch via Nvidia's NVENC and VAAPI

Considerations to take when live streaming:

The following best-practice observations apply when using a hardware-based encoder for live streaming to any platform:

1. Set the buffer size (-bufsize:v) equal to the target bitrate (-b:v). You want to ensure that you're encoding in CBR mode.
2. Omit the presets, as they'll override the desired rate control mode. Use a rate control mode that enforces a constant bitrate. That way, we can provide a stream with a perfectly fixed bitrate (per variant, where applicable), with multiple outputs as shown in some of the examples below.
3. For multiple outputs (where enabled), call up the versatile tee muxer, outputting to multiple streaming services (or to a file, if so desired).
4. Substitute all variables (such as stream keys) with your own.
5. Note that higher thread counts for hardware-accelerated encoders and decoders have rapidly diminishing returns; the thread count is also lowered to ensure that VBV underflows do not occur.
6. On consumer SKUs, NVENC is limited to two encode sessions only. You can override this artificial limit by using this project. This does not apply to the likes of AMD's VCE (via VAAPI) or Intel's VAAPI and QSV implementations.

Part 1: Using Nvidia's NVENC:

If you want the output video frame size to be the same as the input for Twitch, use a command along the lines of the sketch below.
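Only the thread flags (-threads:v 2 -threads:a 8 -complex_filter_threads 2 -filter_threads 2) and the audio parameters (-bsf:a aac_adtstoasc -c:a aac -ac 2 -ar 48000 -b:a 128k) below are taken from the original command; the x11grab and ALSA inputs, the 6000k video bitrate, the ingest URLs and the local file name are assumptions you'll need to adapt to your setup:

    # Capture the X11 desktop and ALSA audio, encode with NVENC in CBR mode
    # (bufsize equal to bitrate, no preset), and tee the result to Twitch,
    # YouTube and a local backup file. Set the two stream-key variables first.
    ffmpeg -threads:v 2 -threads:a 8 -complex_filter_threads 2 -filter_threads 2 \
        -thread_queue_size 512 -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 \
        -thread_queue_size 512 -f alsa -ac 2 -i hw:0,0 \
        -map 0:v -map 1:a \
        -c:v h264_nvenc -rc:v cbr -b:v 6000k -minrate:v 6000k -maxrate:v 6000k -bufsize:v 6000k \
        -bsf:a aac_adtstoasc -c:a aac -ac 2 -ar 48000 -b:a 128k \
        -f tee "[f=flv]rtmp://live.twitch.tv/app/$TWITCH_KEY|[f=flv]rtmp://a.rtmp.youtube.com/live2/$YOUTUBE_KEY|local-backup.mkv"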
The example above will stream to both YouTube and Twitch and, at the same time, store a copy of the video stream on the local filesystem.

You can use xwininfo | grep geometry to select the target window and get placement coordinates. You can also use it to fill in the input screen size automatically: -video_size $(xwininfo -root | awk '/-geometry/ {split($2, a, "+"); print a[1]}').

The pulse input device (requires --enable-libpulse) can be an alternative to the ALSA input device, as in: -f pulse -i default (substitute a specific PulseAudio source name for default as needed).

Part 2: Using Intel's VAAPI to achieve the same:
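The -loglevel and thread flags, the -init_hw_device line, the ALSA input and the audio parameters below are from the original command; the x11grab input, the format=nv12,hwupload chain, the h264_vaapi rate-control settings and the single Twitch output are assumptions (your render node and libva driver may also differ):

    # Initialize a VAAPI device on renderD128 with the i965 driver, capture the
    # desktop and ALSA audio, upload frames to the GPU as NV12, and encode with
    # h264_vaapi in CBR mode before pushing to Twitch over RTMP.
    ffmpeg -loglevel debug -threads:v 2 -threads:a 8 -filter_threads 2 \
        -init_hw_device vaapi=va:/dev/dri/renderD128,driver=i965 -filter_hw_device va \
        -thread_queue_size 512 -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 \
        -thread_queue_size 512 -f alsa -ac 2 -i hw:0,0 \
        -vf 'format=nv12,hwupload' \
        -c:v h264_vaapi -rc_mode:v CBR -b:v 6000k -maxrate:v 6000k -bufsize:v 6000k \
        -bsf:a aac_adtstoasc -c:a aac -ac 2 -b:a 128k \
        -f flv rtmp://live.twitch.tv/app/$TWITCH_KEY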
Note: Ensure that you've granted the CAP_SYS_ADMIN capability to your FFmpeg binary if you intend to capture from KMS devices, as shown in the examples below:

    sudo setcap cap_sys_admin+ep /path/to/ffmpeg

Note that KMS capture is highly prone to failure on multi-GPU systems when using direct device derivation for VAAPI encoder contexts, as shown below. Where possible, stick to x11grab instead.

KMS surfaces can be mapped directly to VAAPI, with scaling enabled, as in the example below.
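The thread flags and the hwmap/scale_vaapi filter chain are from the original; the kmsgrab options, the rate-control settings and the FLV output are assumptions, and audio is omitted here for brevity:

    # Grab KMS surfaces (needs CAP_SYS_ADMIN, see above), map them directly into
    # a VAAPI device derived from the DRM device, scale to 1080p NV12 on the
    # GPU, and encode there with h264_vaapi.
    ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 \
        -framerate 60 -f kmsgrab -i - \
        -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' \
        -c:v h264_vaapi -rc_mode:v CBR -b:v 6000k -maxrate:v 6000k -bufsize:v 6000k \
        -an -f flv rtmp://live.twitch.tv/app/$TWITCH_KEY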
Also, ensure that the stream key used is correct, otherwise your live broadcast will fail.

If you want the output video frame size to be smaller than the input, then you can insert the appropriate scale video filter for VAAPI:

    -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' \

You can see additional details about your webcam with something like:

    ffmpeg -f v4l2 -list_formats all -i /dev/video0

See the documentation on the video4linux2 (v4l2) input device for more info.

Note: Your webcam may natively support whatever frame size you want to overlay onto the main video, so scaling the webcam video as shown in this example can be omitted (just set the appropriate v4l2 -video_size and remove the scale=120:-1,). See this answer on Super User for more details on the same.

This will place your webcam overlay in the top right, and a logo in the bottom left, as sketched below.
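Only the thread flags survive from the original command here; the inputs (desktop via x11grab, a /dev/video0 webcam, a logo.png, ALSA audio), the 10-pixel overlay margins and the NVENC settings are assumptions:

    # Desktop as the main video; the webcam is scaled to 120px wide and overlaid
    # top-right, the logo is overlaid bottom-left, and NVENC encodes the result.
    ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 \
        -thread_queue_size 512 -f x11grab -video_size 1920x1080 -framerate 30 -i :0.0 \
        -thread_queue_size 512 -f v4l2 -video_size 320x240 -i /dev/video0 \
        -i logo.png \
        -thread_queue_size 512 -f alsa -ac 2 -i hw:0,0 \
        -filter_complex "[1:v]scale=120:-1[cam];[0:v][cam]overlay=main_w-overlay_w-10:10[tmp];[tmp][2:v]overlay=10:main_h-overlay_h-10[out]" \
        -map '[out]' -map 3:a \
        -c:v h264_nvenc -rc:v cbr -b:v 6000k -minrate:v 6000k -maxrate:v 6000k -bufsize:v 6000k \
        -c:a aac -ac 2 -ar 48000 -b:a 128k \
        -f flv rtmp://live.twitch.tv/app/$TWITCH_KEY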