2025-02-23 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report
This article focuses on how to capture a frame of audio on Android. It is clearly organized, well suited to beginners, and worth reading. Interested readers can follow along; I hope you get something out of it!
The Android SDK provides two sets of APIs for audio capture: MediaRecorder and AudioRecord. The former is a higher-level API that can directly encode and compress the audio recorded from the phone's microphone (to AMR, MP3, etc.) and save it to a file, while the latter is closer to the bottom layer, allows freer and more flexible control, and yields raw frames of PCM audio data.
If you simply want to build a recorder that saves audio to a file, MediaRecorder is recommended. If you need to run further algorithms on the audio, or use a third-party encoding library for compression, network transmission, and similar applications, AudioRecord is recommended. In fact, under the hood MediaRecorder also calls AudioRecord to interact with AudioFlinger in the Android framework layer.
Audio development has wide applications beyond local recording, so this article focuses on how to use the lower-level AudioRecord API to capture audio data. Note that the data it captures is in raw PCM format; if you want to compress it to MP3, AAC, or another format, you must pass it through an encoder.
1. The workflow of AudioRecord
First, let's take a look at the workflow of AudioRecord:
(1) Configure parameters and initialize the internal audio buffer
(2) Start capturing
(3) A thread continuously "reads" audio data out of AudioRecord's buffer. This must be done promptly, otherwise an "overrun" error occurs. This error is common in audio development and means the application layer did not fetch the audio data in time, so the internal audio buffer overflowed.
(4) Stop capturing and release resources
2. Parameter configuration of AudioRecord
Capture parameters are configured mainly through the AudioRecord constructor, whose prototype is as follows:
AudioRecord (int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes)
Let's explain the meaning of these parameters one by one (it is recommended to compare them with my previous article):
(1) audioSource
This parameter specifies the audio capture source. Optional values are defined as constants in the MediaRecorder.AudioSource class. Commonly used values include DEFAULT (the default source), VOICE_RECOGNITION (tuned for speech recognition, equivalent to DEFAULT on many devices), MIC (the phone microphone), VOICE_COMMUNICATION (for VoIP applications), and so on.
(2) sampleRateInHz
The sampling rate. Note that 44100 Hz is currently the only sampling rate guaranteed to work on all Android phones.
(3) channelConfig
The channel configuration. Optional values are defined as constants in the AudioFormat class, such as CHANNEL_IN_MONO (mono) and CHANNEL_IN_STEREO (stereo).
(4) audioFormat
This parameter configures the sample "bit depth". Optional values are likewise defined as constants in the AudioFormat class; the common ones are ENCODING_PCM_16BIT (16 bit) and ENCODING_PCM_8BIT (8 bit). Note that only the former is guaranteed to be compatible with all Android phones.
(5) bufferSizeInBytes
This is the most difficult and most important parameter to understand. It configures the size of the audio buffer inside AudioRecord, which must be no smaller than the size of one audio "frame". As described in the previous article, the size of one audio frame (in bytes) is calculated as follows:
int size = sampleRate x (bitDepth / 8) x frameDuration (s) x channelCount
The frame duration is generally between 2.5 ms and 120 ms and is decided by the manufacturer or the specific application. Clearly, the shorter each frame, the lower the latency, but also the more fragmented the data.
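The formula above can be checked with a small plain-Java helper. This is a minimal sketch (the method name frameSizeBytes is mine, not from the Android SDK), assuming a 20 ms frame of 16-bit (2-byte) mono PCM at 44100 Hz:

```java
public class FrameSize {
    // size = sampleRate * bytesPerSample * channels * duration(s)
    static int frameSizeBytes(int sampleRateHz, int bytesPerSample, int channels, double frameMs) {
        return (int) (sampleRateHz * bytesPerSample * channels * (frameMs / 1000.0));
    }

    public static void main(String[] args) {
        // 44100 Hz, 16-bit mono, 20 ms per frame -> 1764 bytes
        System.out.println(frameSizeBytes(44100, 2, 1, 20));
    }
}
```

Shortening the frame duration shrinks this number proportionally, which is exactly the latency-versus-fragmentation trade-off described above.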
In Android development, the AudioRecord class provides a function to help you determine the bufferSizeInBytes. The prototype is as follows:
static int getMinBufferSize (int sampleRateInHz, int channelConfig, int audioFormat)
Different manufacturers' underlying implementations differ, but they are all essentially based on the formula above for the size of one frame; the audio buffer is then some multiple of that frame size. Interested readers can explore the source code further.
In actual development, it is strongly recommended to let this function calculate the bufferSizeInBytes to pass in, rather than computing it manually.
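The returned buffer size also determines how much audio the buffer can hold, which is useful for reasoning about latency and overrun risk. A minimal plain-Java sketch (the helper bufferMillis is mine, not an Android API; 3528 bytes is an assumed example value, not an actual getMinBufferSize result):

```java
public class BufferDuration {
    // How many milliseconds of audio a buffer of `bytes` holds.
    static double bufferMillis(int bytes, int sampleRateHz, int bytesPerSample, int channels) {
        return 1000.0 * bytes / ((double) sampleRateHz * bytesPerSample * channels);
    }

    public static void main(String[] args) {
        // 3528 bytes of 16-bit mono PCM at 44100 Hz = 40 ms of audio:
        // the reader thread must drain the buffer faster than that to avoid overrun.
        System.out.println(bufferMillis(3528, 44100, 2, 1));
    }
}
```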
3. Audio acquisition thread
After the AudioRecord object is created, you can start collecting audio data. Use the following two functions to control the start / stop of the collection:
AudioRecord.startRecording ()
AudioRecord.stop ()
Once capture starts, the audio data must be read out promptly in a thread loop, otherwise the system will report an overrun. The API for reading the data is:
AudioRecord.read (byte[] audioData, int offsetInBytes, int sizeInBytes)
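With ENCODING_PCM_16BIT, the bytes that read() fills in are little-endian 16-bit samples. Decoding them into short values is often the first step of any further processing. A minimal sketch (the helper toSamples is mine, not part of the SDK; it assumes a 16-bit little-endian buffer as described above):

```java
public class PcmDecode {
    // Combine each little-endian byte pair into one signed 16-bit sample.
    static short[] toSamples(byte[] pcm) {
        short[] out = new short[pcm.length / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (short) ((pcm[2 * i] & 0xFF) | (pcm[2 * i + 1] << 8));
        }
        return out;
    }

    public static void main(String[] args) {
        // Two samples: 0x0001 -> 1, 0x8000 -> -32768 (most negative 16-bit value)
        byte[] pcm = {0x01, 0x00, 0x00, (byte) 0x80};
        short[] s = toSamples(pcm);
        System.out.println(s[0] + " " + s[1]);
    }
}
```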
4. Sample code
I simply encapsulated the interface of the AudioRecord class and provided an AudioCapturer class, which can be downloaded from my Github: https://github.com/Jhuster/Android/blob/master/Audio/AudioCapturer.java
A copy is also posted here:
/*
 *  COPYRIGHT NOTICE
 *  Copyright (C) 2016, Jhuster
 *  https://github.com/Jhuster/Android
 *
 *  @license under the Apache License, Version 2.0
 *
 *  @file    AudioCapturer.java
 *  @version 2016
 *  @author  Jhuster
 *  @date    2016-03-10
 */
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.util.Log;

public class AudioCapturer {

    private static final String TAG = "AudioCapturer";

    private static final int DEFAULT_SOURCE = MediaRecorder.AudioSource.MIC;
    private static final int DEFAULT_SAMPLE_RATE = 44100;
    private static final int DEFAULT_CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;
    private static final int DEFAULT_AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;

    private AudioRecord mAudioRecord;
    private int mMinBufferSize = 0;

    private Thread mCaptureThread;
    private boolean mIsCaptureStarted = false;
    private volatile boolean mIsLoopExit = false;

    private OnAudioFrameCapturedListener mAudioFrameCapturedListener;

    public interface OnAudioFrameCapturedListener {
        public void onAudioFrameCaptured(byte[] audioData);
    }

    public boolean isCaptureStarted() {
        return mIsCaptureStarted;
    }

    public void setOnAudioFrameCapturedListener(OnAudioFrameCapturedListener listener) {
        mAudioFrameCapturedListener = listener;
    }

    public boolean startCapture() {
        return startCapture(DEFAULT_SOURCE, DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_CONFIG, DEFAULT_AUDIO_FORMAT);
    }

    public boolean startCapture(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat) {
        if (mIsCaptureStarted) {
            Log.e(TAG, "Capture already started!");
            return false;
        }

        mMinBufferSize = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
        if (mMinBufferSize == AudioRecord.ERROR_BAD_VALUE) {
            Log.e(TAG, "Invalid parameter!");
            return false;
        }
        Log.d(TAG, "getMinBufferSize = " + mMinBufferSize + " bytes!");

        mAudioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, mMinBufferSize);
        if (mAudioRecord.getState() == AudioRecord.STATE_UNINITIALIZED) {
            Log.e(TAG, "AudioRecord initialize fail!");
            return false;
        }

        mAudioRecord.startRecording();

        mIsLoopExit = false;
        mCaptureThread = new Thread(new AudioCaptureRunnable());
        mCaptureThread.start();

        mIsCaptureStarted = true;
        Log.d(TAG, "Start audio capture success!");
        return true;
    }

    public void stopCapture() {
        if (!mIsCaptureStarted) {
            return;
        }

        mIsLoopExit = true;
        try {
            mCaptureThread.interrupt();
            mCaptureThread.join(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        if (mAudioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            mAudioRecord.stop();
        }
        mAudioRecord.release();

        mIsCaptureStarted = false;
        mAudioFrameCapturedListener = null;
        Log.d(TAG, "Stop audio capture success!");
    }

    private class AudioCaptureRunnable implements Runnable {
        @Override
        public void run() {
            while (!mIsLoopExit) {
                byte[] buffer = new byte[mMinBufferSize];
                int ret = mAudioRecord.read(buffer, 0, mMinBufferSize);
                if (ret == AudioRecord.ERROR_INVALID_OPERATION) {
                    Log.e(TAG, "Error ERROR_INVALID_OPERATION");
                } else if (ret == AudioRecord.ERROR_BAD_VALUE) {
                    Log.e(TAG, "Error ERROR_BAD_VALUE");
                } else {
                    if (mAudioFrameCapturedListener != null) {
                        mAudioFrameCapturedListener.onAudioFrameCaptured(buffer);
                    }
                    Log.d(TAG, "OK, Captured " + ret + " bytes!");
                }
            }
        }
    }
}
Note that the following permission must be added to AndroidManifest.xml before use:
&lt;uses-permission android:name="android.permission.RECORD_AUDIO" /&gt;
Thank you for reading. I believe you now have some understanding of how to capture a frame of audio on Android. Go ahead and practice it! If you want to learn more, keep following this site for more articles.