
How to realize playback controlled by Qt ffmpeg

2025-01-26 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains how to control playback with Qt and ffmpeg. The approach described here is simple, fast and practical; interested readers are welcome to follow along.

I. Preface

Many people who use ffmpeg to decode a video stream run into the same problem: how do you pause? With a local video file, you simply stop decoding. But with a network video stream, you will find that this does not work at all: if you stop decoding and later resume, the picture continues from where you paused, replaying stale data. Why is that confusing behavior happening? My understanding is that once a network stream is opened, data keeps pouring in whether you want it or not. If you do not consume it, more and more accumulates, so you must keep reading packets out of the buffer. Therefore the correct way to pause a video stream is to keep decoding as usual but simply skip processing and drawing. To put it bluntly, it is a fake pause: it looks paused, but in the background decoding never stops.
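The "fake pause" idea above can be sketched without any ffmpeg dependency. This is a minimal simulation under assumed names (`Player`, `onPacketArrived`, `decodeLoopOnce` are all hypothetical stand-ins, not the project's real API): packets keep arriving whether or not we are paused, the loop always drains them, and the pause flag only controls whether the decoded frame is handed to the UI.

```cpp
#include <deque>
#include <vector>

// Hypothetical sketch of the "fake pause" technique: the packet buffer is
// always drained (decoding continues), but rendering is skipped while paused.
struct Player {
    bool paused = false;
    std::deque<int> buffer;     // stand-in for the stream's incoming packet buffer
    std::vector<int> rendered;  // frames actually handed to the UI

    void onPacketArrived(int pkt) { buffer.push_back(pkt); }

    void decodeLoopOnce() {
        while (!buffer.empty()) {
            int frame = buffer.front();  // "decode" the packet
            buffer.pop_front();
            if (!paused) {               // fake pause: only drawing is skipped
                rendered.push_back(frame);
            }
        }
    }
};
```

If instead the loop stopped reading while paused, `buffer` would keep growing and resuming would replay stale frames, which is exactly the symptom described above.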

When playing a local file with ffmpeg, if you add no delay at all, you will find the playback finishes in a flash. Depending on the machine, a computer with good performance can burn through a 5-minute video in a few seconds. Strange, how can it play so fast? In fact ffmpeg's decoder only cares about decoding: it runs flat out and drains your CPU or GPU resources (GPU if hardware decoding is enabled). Each decoded frame carries pts, dts and other timing information, and you must delay according to it, waiting the appropriate amount of time before presenting the next frame. How long to wait? The general method: record the start time right after opening the stream. During decoding, take the time base time_base of the corresponding stream (video stream, audio stream, etc.), call av_rescale_q to convert the packet's pts into pts_time, compute the current position now_time as av_gettime() - startTime, and take the difference pts_time - now_time. If it is positive, that is the number of microseconds to delay. Note that it is microseconds, not milliseconds; pass it directly to av_usleep.
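The arithmetic above can be worked through in plain code. This is a self-contained illustration, not the project's real implementation: `Rational`, `rescale_q` and `delay_us` are simplified stand-ins for ffmpeg's `AVRational`, `av_rescale_q` and the delay computation (the real `av_rescale_q` also handles overflow and rounding modes, which this sketch omits).

```cpp
#include <cstdint>

// Simplified stand-in for ffmpeg's AVRational.
struct Rational { int num; int den; };

// 1 second expressed in microseconds, like ffmpeg's AV_TIME_BASE.
static const int64_t TIME_BASE = 1000000;

// Simplified av_rescale_q: convert a timestamp between two time bases.
// (No overflow protection, unlike the real function.)
int64_t rescale_q(int64_t ts, Rational from, Rational to) {
    return ts * from.num * to.den / (static_cast<int64_t>(from.den) * to.num);
}

// Microseconds to sleep before presenting this packet.
// elapsed_us corresponds to av_gettime() - startTime in the article's code.
int64_t delay_us(int64_t pts, Rational stream_time_base, int64_t elapsed_us) {
    Rational us_base = {1, static_cast<int>(TIME_BASE)};
    int64_t pts_time = rescale_q(pts, stream_time_base, us_base);
    return pts_time - elapsed_us;  // positive => delay this many microseconds
}
```

For example, with the common video time base of 1/90000, a packet with pts 90000 is due at the 1-second mark; if only 0.4 s of wall-clock time has elapsed since opening the stream, the loop should sleep for the remaining 0.6 s (600000 µs).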

II. Functional features

Multi-threaded real-time playback of video streams, local videos, USB cameras and so on.

Supports windows, linux and mac; supports ffmpeg3 and ffmpeg4; supports 32-bit and 64-bit.

Displays images in a separate thread so the main interface does not freeze.

Automatically reconnects to the webcam.

The border size (i.e. the offset) and border color can be set.

You can choose whether to draw OSD labels (label text or picture) and set the label position.

Two OSD positions and styles can be set.

You can choose whether to save to a file, and set the file name.

You can drag a file directly onto the ffmpegwidget control to play it.

Supports h265 video streams, rtmp and other common video streams.

Playback can be paused and resumed.

Supports saving a single video file as well as saving video files at timed intervals.

A customizable floating bar at the top sends a click signal notification; it can be enabled or disabled.

The picture can be set to stretch-fill or proportional fill.

The decoding strategy can be set to speed priority, quality priority or balanced.

You can take screenshots (the original full-size image) and snapshots of the displayed video.

Video file storage supports both raw bitstream and MP4 files.

qsv, dxva2, d3d11va and other hardware decoding methods are supported.

Supports drawing video data with opengl, with very low CPU usage.

Supports embedded linux via cross-compilation.

III. Effect picture

IV. Core code

void FFmpegWidget::updateImage(const QImage &image)
{
    // Paused or invisible rtsp video streams need to stop drawing
    if (!this->property("isPause").toBool() && this->isVisible() && thread->isRunning()) {
        // Copying the image has an advantage: on a weak processor the picture
        // will not tear. The disadvantage is that the copy takes time.
        // QImage is a shallow copy by default, so the underlying data may
        // change while you are drawing.
        this->image = copyImage ? image.copy() : image;
        this->update();
    }
}

void FFmpegWidget::updateFrame(AVFrame *frame)
{
#ifdef opengl
    // Paused or invisible rtsp video streams need to stop drawing
    if (!this->property("isPause").toBool() && (yuvWidget->isVisible() || nv12Widget->isVisible()) && thread->isRunning()) {
        // With hardware acceleration render nv12 directly, otherwise render yuv
        if (thread->getHardware() == "none") {
            yuvWidget->setFrameSize(frame->width, frame->height);
            yuvWidget->updateTextures(frame->data[0], frame->data[1], frame->data[2],
                                      frame->linesize[0], frame->linesize[1], frame->linesize[2]);
        } else {
            nv12Widget->setFrameSize(frame->width, frame->height);
            nv12Widget->updateTextures(frame->data[0], frame->data[1],
                                       frame->linesize[0], frame->linesize[1]);
        }
    }
#endif
}

void FFmpegThread::delayTime(int streamIndex, AVPacket *packet)
{
    // rtsp video streams do not need delay
    if (isRtsp) {
        return;
    }

    // Files with no video duration, and local asf files, use another delay calculation
    if (streamIndex == videoStreamIndex) {
        if (interval != 1 || videoTime < 0 || url.toLower().endsWith("asf")) {
            sleepVideo();
            return;
        }
    }

    qint64 offset_time = getDelayTime(streamIndex, packet);
    if (offset_time > 0) {
        av_usleep(offset_time);
    }
}

qint64 FFmpegThread::getDelayTime(int streamIndex, AVPacket *packet)
{
    AVRational time_base = formatCtx->streams[streamIndex]->time_base;
    AVRational time_base_q = {1, AV_TIME_BASE};
    int64_t pts_time = av_rescale_q(packet->pts, time_base, time_base_q);
    int64_t now_time = av_gettime() - startTime;
    int64_t offset_time = pts_time - now_time;
    return offset_time;
}

At this point, I believe you have a deeper understanding of how to control playback with Qt and ffmpeg. You might as well try it out in practice. Follow us and keep learning!
