Many newcomers are unclear about how to build a video-editing project on iOS with AVFoundation. To help with that, this article explains the process in detail; I hope everyone who needs it gains something from it.
I recently built a small video-editing project and stepped on a few pitfalls, but got the features working in the end.
Apple does provide UIVideoEditController for video processing, but it is hard to extend or customize, so instead we use Apple's AVFoundation framework to build custom video processing.
I also found that there is little systematic material about this online, so I wrote this article in the hope that it helps other newcomers (like me) who are doing video processing.
Project demo
The project roughly supports undoing, splitting, and deleting segments on the video track, and dragging video blocks to extend or shrink the video.
Feature implementation
1. Select and play a video
Select a video with UIImagePickerController, then present a custom editing controller.
There's nothing to say about this part.
Example:
// Select a video
// (kUTTypeMovie requires importing MobileCoreServices)
@objc func selectVideo() {
    if UIImagePickerController.isSourceTypeAvailable(.photoLibrary) {
        // Initialize the picker controller
        let imagePicker = UIImagePickerController()
        // Set the delegate
        imagePicker.delegate = self
        // Specify the picker type
        imagePicker.sourceType = .photoLibrary
        // Show only video files
        imagePicker.mediaTypes = [kUTTypeMovie as String]
        // Present the picker
        self.present(imagePicker, animated: true, completion: nil)
    } else {
        print("Error reading the photo library")
    }
}

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    // Get the video URL (the selected video is automatically copied to the app's temporary folder)
    guard let videoURL = info[UIImagePickerController.InfoKey.mediaURL] as? URL else { return }
    let pathString = videoURL.relativePath
    print("Video path: \(pathString)")
    // Dismiss the picker, then present the custom editing controller
    self.dismiss(animated: true, completion: {
        let editorVC = EditorVideoViewController(with: videoURL)
        editorVC.modalPresentationStyle = UIModalPresentationStyle.fullScreen
        self.present(editorVC, animated: true) {}
    })
}

2. Initialize the video track by generating thumbnails frame by frame
CMTime
Before covering the implementation, a quick introduction to CMTime. CMTime describes a moment in time precisely. Suppose we want to express a point in a video, say 1:01. Most of the time NSTimeInterval t = 61.0 is good enough, but the deeper problem with floating-point numbers is that they cannot exactly represent values such as 0.000001: add 0.000001 a million times and the result may be 1.0000000000079181 instead of exactly 1. Video processing involves large amounts of such additions and subtractions, and the errors accumulate, so we need another way to express time, namely CMTime.
CMTime is a C struct with four members.
typedef struct {
    CMTimeValue value;        // the current value
    CMTimeScale timescale;    // the reference scale for value (e.g. 1000)
    CMTimeFlags flags;
    CMTimeEpoch epoch;
} CMTime;
For example, if timescale = 1000, then one second corresponds to value = 1000 * 1 = 1000.
CMTimeScale timescale is the reference scale for value: it says how many parts one second is divided into. Because it controls the precision of the whole CMTime, it is particularly important. For example, when timescale is 1, a CMTime cannot represent anything finer than one second, nor any increment within a second. Likewise, when timescale is 1000, each second is divided into 1000 parts and value expresses a number of milliseconds.
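As a small illustration (a standalone sketch, not from the original project), here is how value and timescale map to seconds, and how the floating-point drift mentioned above compares with exact CMTime arithmetic:

import CoreMedia

// 61 seconds at millisecond precision: seconds = value / timescale = 61000 / 1000
let oneMinuteOne = CMTimeMake(value: 61_000, timescale: 1000)
print(CMTimeGetSeconds(oneMinuteOne))   // 61.0

// Summing 0.000001 a million times drifts, while CMTime arithmetic stays exact
var floatSum = 0.0
var cmSum = CMTime.zero
let microsecond = CMTimeMake(value: 1, timescale: 1_000_000)
for _ in 0..<1_000_000 {
    floatSum += 0.000001
    cmSum = CMTimeAdd(cmSum, microsecond)
}
print(floatSum)                  // ~1.0000000000079181, not exactly 1
print(CMTimeGetSeconds(cmSum))   // exactly 1.0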
Implementation
Call the method generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler).
/**
 @method            generateCGImagesAsynchronouslyForTimes:completionHandler:
 @abstract          Returns a series of CGImageRefs for an asset at or near the specified times.
 @param requestedTimes
                    An NSArray of NSValues, each containing a CMTime, specifying the asset times at which an image is requested.
 @param handler     A block that will be called when an image request is complete.
 @discussion        Employs an efficient "batch mode" for getting images in time order. The client will receive exactly one handler callback for each requested time in requestedTimes. Changes to generator properties (snap behavior, maximum size, etc...) will not affect outstanding asynchronous image generation requests. The generated image is not retained. Clients should retain the image if they wish it to persist after the completion handler returns.
 */
open func generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler)
As the official comment shows, two parameters need to be passed in:
requestedTimes: [NSValue]: an array of request times (of type NSValue); each element wraps a CMTime specifying an asset time at which an image is requested.
completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler: the block that will be called when an image request completes. Since the method runs asynchronously, you need to return to the main thread to update the UI.
Example:
func splitVideoFileUrlFps(splitFileUrl: URL, fps: Float, splitCompleteClosure: @escaping (Bool, [UIImage]) -> Void) {
    var splitImages = [UIImage]()

    // Initialize the asset
    let optDict = NSDictionary(object: NSNumber(value: false), forKey: AVURLAssetPreferPreciseDurationAndTimingKey as NSCopying)
    let urlAsset = AVURLAsset(url: splitFileUrl, options: optDict as? [String: Any])

    let cmTime = urlAsset.duration
    let durationSeconds: Float64 = CMTimeGetSeconds(cmTime)

    // Build the list of CMTimes, i.e. the times at which thumbnails are requested
    var times = [NSValue]()
    let totalFrames: Float64 = durationSeconds * Float64(fps)
    var timeFrame: CMTime
    for i in 0...Int(totalFrames) {
        timeFrame = CMTimeMake(value: Int64(i), timescale: Int32(fps))
        let timeValue = NSValue(time: timeFrame)
        times.append(timeValue)
    }

    let imageGenerator = AVAssetImageGenerator(asset: urlAsset)
    imageGenerator.requestedTimeToleranceBefore = CMTime.zero
    imageGenerator.requestedTimeToleranceAfter = CMTime.zero

    let timesCount = times.count
    // Call the method to generate the thumbnails
    imageGenerator.generateCGImagesAsynchronously(forTimes: times) { (requestedTime, image, actualTime, result, error) in
        var isSuccess = false
        switch result {
        case AVAssetImageGenerator.Result.cancelled:
            print("cancelled")
        case AVAssetImageGenerator.Result.failed:
            print("failed")
        case AVAssetImageGenerator.Result.succeeded:
            let framImg = UIImage(cgImage: image!)
            splitImages.append(self.flipImage(image: framImg, orientaion: 1))
            // Call back with the result once the last frame has been generated
            if Int(requestedTime.value) == (timesCount - 1) {
                isSuccess = true
                splitCompleteClosure(isSuccess, splitImages)
                print("completed")
            }
        }
    }
}

// Update the UI
self.splitVideoFileUrlFps(splitFileUrl: url, fps: 1) { [weak self] (isSuccess, splitImgs) in
    if isSuccess {
        // The method is asynchronous, so return to the main thread to update the UI
        DispatchQueue.main.async {
        }
        print("Total number of images: \(String(describing: self?.imageArr.count))")
    }
}

3. Seek to a specified time in the video

/**
 @method            seekToTime:toleranceBefore:toleranceAfter:
 @abstract          Moves the playback cursor within a specified time bound.
 @param time
 @param toleranceBefore
 @param toleranceAfter
 @discussion        Use this method to seek to a specified time for the current player item.
                    The time seeked to will be within the range [time-toleranceBefore, time+toleranceAfter] and may differ from the specified time for efficiency.
                    Pass kCMTimeZero for both toleranceBefore and toleranceAfter to request sample accurate seeking which may incur additional decoding delay.
                    Messaging this method with beforeTolerance:kCMTimePositiveInfinity and afterTolerance:kCMTimePositiveInfinity is the same as messaging seekToTime: directly.
 */
open func seek(to time: CMTime, toleranceBefore: CMTime, toleranceAfter: CMTime)
Three parameters are passed in: time: CMTime, toleranceBefore: CMTime, and toleranceAfter: CMTime. time is easy to understand: it is the time you want to jump to. The last two are, per the official comment, tolerances: the player will seek somewhere within the range [time-toleranceBefore, time+toleranceAfter]. If you pass kCMTimeZero for both (in current SDKs this is written CMTime.zero), the seek is sample-accurate, but this incurs extra decoding time.
Example:
let totalTime = CMTimeGetSeconds(self.avPlayer.currentItem!.duration)
let scale = self.avPlayer.currentItem!.duration.timescale
// width: the track position being jumped to; videoWidth: the total length of the video track
let process = width / videoWidth
// Seek (fast forward)
self.avPlayer.seek(to: CMTimeMake(value: Int64(totalTime * Float64(process) * Float64(scale)), timescale: scale), toleranceBefore: CMTime.zero, toleranceAfter: CMTime.zero)

4. Player monitoring
By observing the player, we can move the track view in response to playback, so that the video player and the video track stay in sync.
/**
 @method            addPeriodicTimeObserverForInterval:queue:usingBlock:
 @abstract          Requests invocation of a block during playback to report changing time.
 @param interval    The interval of invocation of the block during normal playback, according to progress of the current time of the player.
 @param queue       The serial queue onto which block should be enqueued. If you pass NULL, the main queue (obtained using dispatch_get_main_queue()) will be used. Passing a concurrent queue to this method will result in undefined behavior.
 @param block       The block to be invoked periodically.
 @result            An object conforming to the NSObject protocol. You must retain this returned value as long as you want the time observer to be invoked by the player. Pass this object to -removeTimeObserver: to cancel time observation.
 @discussion        The block is invoked periodically at the interval specified, interpreted according to the timeline of the current item. The block is also invoked whenever time jumps and whenever playback starts or stops. If the interval corresponds to a very short interval in real time, the player may invoke the block less frequently than requested. Even so, the player will invoke the block sufficiently often for the client to update indications of the current time appropriately in its end-user interface. Each call to -addPeriodicTimeObserverForInterval:queue:usingBlock: should be paired with a corresponding call to -removeTimeObserver:. Releasing the observer object without a call to -removeTimeObserver: will result in undefined behavior.
 */
open func addPeriodicTimeObserver(forInterval interval: CMTime, queue: DispatchQueue?, using block: @escaping (CMTime) -> Void) -> Any
The important parameter is interval: CMTime, which determines how often the callback fires; if you change the frame of the video track inside this callback, it also determines how smoothly the track moves.
Example:
// Observe the player
self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120), queue: DispatchQueue.main) { [weak self] (time) in
    // Move the video track in step with playback
}
The conflict with the seek (fast-forward) method
This observer and the seek method from point 3 cause a problem: the callback also fires when you drag the video track and trigger a seek, which produces an endless loop: drag the track (change its frame) -> seek -> callback fires -> change the frame again. You therefore have to add a condition so the callback is ignored in that case, as sketched below.
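A rough sketch of that guard (the isDraggingTrack flag and updateTrackPosition(for:) helper are hypothetical names, not from the project):

var isDraggingTrack = false          // hypothetical: true while the user is dragging the track
var timeObserver: Any?               // keep the returned observer so it can be removed later

func addPlayerObserver() {
    timeObserver = avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120),
                                                    queue: DispatchQueue.main) { [weak self] time in
        // Skip callbacks triggered by our own drag-driven seek,
        // breaking the drag -> seek -> callback -> move-track loop
        guard let self = self, !self.isDraggingTrack else { return }
        self.updateTrackPosition(for: time)   // hypothetical UI-update helper
    }
}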
Problems caused by the linkage between the seek method and the player
Video playback is asynchronous, and the seek method needs time to decode the video, so there is a lag while the two interact. When you think the seek has finished and want to move the video track, the decoding delay means several stale times are still delivered to the callback, making the video track jitter back and forth. The current project therefore checks, inside the callback, whether the frame about to be applied is valid (not too large or too small), as sketched below.
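A rough sketch of that sanity check (the threshold and the lastTrackOffset property are illustrative, not taken from the project):

var lastTrackOffset: CGFloat = 0   // hypothetical: the last offset actually applied to the track

// Reject offsets that jump implausibly far from the last applied position; stale times
// reported while a seek is still decoding would otherwise make the track jitter.
func shouldApplyTrackOffset(_ newOffset: CGFloat) -> Bool {
    let maxJumpPerCallback: CGFloat = 20   // illustrative threshold, in points
    guard abs(newOffset - lastTrackOffset) <= maxJumpPerCallback else { return false }
    lastTrackOffset = newOffset
    return true
}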
Ps: if there are better solutions to these two problems, welcome to discuss them together!
5. Export the video

/**
 @method            insertTimeRange:ofTrack:atTime:error:
 @abstract          Inserts a timeRange of a source track into a track of a composition.
 @param timeRange   Specifies the timeRange of the track to be inserted.
 @param track       Specifies the source track to be inserted. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting in MacOS X 10.10 and iOS 8.0).
 @param startTime   Specifies the time at which the inserted track is to be presented by the composition track. You may pass kCMTimeInvalid for startTime to indicate that the timeRange should be appended to the end of the track.
 @param error       Describes failures that may be reported to the user, e.g. the asset that was selected for insertion in the composition is restricted by copy-protection.
 @result            A BOOL value indicating the success of the insertion.
 @discussion        You provide a reference to an AVAssetTrack and the timeRange within it that you want to insert. You specify the start time in the target composition track at which the timeRange should be inserted. Note that the inserted track timeRange will be presented at its natural duration and rate. It can be scaled to a different duration (and presented at a different rate) via -scaleTimeRange:toDuration:.
 */
open func insertTimeRange(_ timeRange: CMTimeRange, of track: AVAssetTrack, at startTime: CMTime) throws
The three parameters passed in:
timeRange: CMTimeRange: specifies the time range of the source track to insert.
track: AVAssetTrack: specifies the source track to insert. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting with Mac OS X 10.10 and iOS 8.0).
startTime: CMTime: specifies the point in the composition at which the inserted track is presented. You can pass kCMTimeInvalid to append the time range to the end of the track.
Example:
let composition = AVMutableComposition()
// Merge the video and audio tracks
let videoTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID())
let audioTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: CMPersistentTrackID())

let asset = AVAsset.init(url: self.url)
var insertTime: CMTime = CMTime.zero
let timeScale = self.avPlayer.currentItem?.duration.timescale

// Loop over the information of each clip
for clipsInfo in self.clipsInfoArr {
    // Total duration of the clip
    let clipsDuration = Double(Float(clipsInfo.width) / self.videoWidth) * self.totalTime
    // Start time of the clip
    let startDuration = -Float(clipsInfo.offset) / self.perSecondLength

    do {
        try videoTrack?.insertTimeRange(
            CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!),
                            duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)),
            of: asset.tracks(withMediaType: AVMediaType.video)[0],
            at: insertTime)
    } catch _ {}

    do {
        try audioTrack?.insertTimeRange(
            CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!),
                            duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)),
            of: asset.tracks(withMediaType: AVMediaType.audio)[0],
            at: insertTime)
    } catch _ {}

    insertTime = CMTimeAdd(insertTime, CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!))
}

videoTrack?.preferredTransform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)

// Build the path for the merged video
let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
let destinationPath = documentsPath + "/mergeVideo-\(arc4random() % 1000).mov"
print("Merged video: \(destinationPath)")
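The composition above still has to be written to disk. A minimal sketch using AVAssetExportSession, assuming the composition and destinationPath built in the previous snippet (the preset choice and completion handling are illustrative):

// Export the merged composition to the destination path
if let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) {
    exportSession.outputURL = URL(fileURLWithPath: destinationPath)
    exportSession.outputFileType = .mov
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.exportAsynchronously {
        DispatchQueue.main.async {
            switch exportSession.status {
            case .completed:
                print("Export finished: \(destinationPath)")
            case .failed, .cancelled:
                print("Export failed: \(String(describing: exportSession.error))")
            default:
                break
            }
        }
    }
}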
With these APIs plus the interaction logic, you can build a complete editing feature.