
How to use AudioTrack API


This article explains how to use the AudioTrack API. The content is simple and clear and easy to follow; please work through the analysis below to learn how AudioTrack actually works.

1 Purpose

The purpose of this article is to analyze the Android Audio system code, including Android-specific mechanisms and the use of some common classes such as Thread and MemoryBase.

The process of analysis is as follows:

- Start from a class at the API layer, for which the application layer first needs a simple usage flow.
- Follow that flow step by step down into the JNI layer and then the service layer. Classes or methods that are unfamiliar or seen for the first time will be explained along the way. In other words, depth first.

1.1 Analysis tools

The analysis tools are simple: Source Insight and the Android API documentation, plus of course the Android source code itself. This analysis is based on the Froyo source.

Note that the Froyo source tree is huge; do not add all of it to Source Insight. Just add the source under the frameworks directory, and add other directories later if you need them.

2 The Audio system

Let's take a look at what the Audio system contains. According to the Android SDK documentation, there are three main classes:

- AudioManager: mainly used to manage the Audio system
- AudioTrack: mainly used to play sound
- AudioRecord: mainly used to record sound

Understanding AudioManager requires considering the sound policy of the whole system, such as the phone ringtone, the SMS ringtone, and so on. Generally speaking, the simplest thing is just playing a sound, so we will start with AudioTrack.

3 AudioTrack (Java layer)

3.1 An example of using the AudioTrack API

Let's look at a usage example first and then follow it for the analysis. For other AudioTrack methods and usage notes, please read the API documentation yourself.

// Get the minimum buffer size based on the sample rate, sample precision, and mono/stereo setting.
int bufsize = AudioTrack.getMinBufferSize(8000,     // 8K sample points per second
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,   // dual channel
        AudioFormat.ENCODING_PCM_16BIT);            // 16 bits, i.e. 2 bytes per sample point
// Note: with a little digital-audio arithmetic you can also work out how much data one second of audio needs from these parameters.

// Create the AudioTrack.
AudioTrack trackplayer = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        bufsize,
        AudioTrack.MODE_STREAM);

trackplayer.play();                                 // start playback
trackplayer.write(bytes_pkg, 0, bytes_pkg.length);  // write data to the track
...
trackplayer.stop();      // stop playback
trackplayer.release();   // release the underlying resources

Here are two things to explain:

1 What AudioTrack.MODE_STREAM means:

AudioTrack has two data-loading modes, MODE_STATIC and MODE_STREAM. STREAM means the application writes data to the AudioTrack again and again through write() calls. This is similar to sending data over a socket: the application gets the data from somewhere, for example by decoding compressed audio into PCM, and then writes it to the AudioTrack.

The drawback of this mode is that every write crosses from the Java layer into the native layer, which costs some efficiency.

STATIC means the audio data is placed into a fixed buffer once, when the track is created, and handed to the AudioTrack in one go, so there is no need to call write over and over; the AudioTrack plays the data in that buffer by itself.

This mode is well suited to short sounds such as ringtones, which occupy little memory and require low latency.
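As a rough illustration of MODE_STATIC (a minimal sketch of my own, not from the original text; loadRingtonePcm() is a hypothetical helper that returns the clip as 16-bit PCM bytes):

// Sketch of MODE_STATIC usage: load the whole clip once, then play it.
byte[] pcmData = loadRingtonePcm();                 // hypothetical helper returning 16-bit PCM
AudioTrack staticTrack = new AudioTrack(
        AudioManager.STREAM_NOTIFICATION,           // a notification-type sound
        8000,                                       // sample rate
        AudioFormat.CHANNEL_CONFIGURATION_MONO,     // mono
        AudioFormat.ENCODING_PCM_16BIT,
        pcmData.length,                             // buffer must hold the whole clip
        AudioTrack.MODE_STATIC);
staticTrack.write(pcmData, 0, pcmData.length);      // write the data once, up front
staticTrack.play();                                 // playback reads from the fixed buffer

Note the order: in STATIC mode the data is written before play, whereas in STREAM mode play comes first and write is then called repeatedly.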

2 StreamType

This is the first parameter used when constructing an AudioTrack. It is tied to Android's AudioManager and to the phone's audio management policy.

Android classifies the system's sounds into the following common categories (the list is not exhaustive):

- STREAM_ALARM: alarm sounds
- STREAM_MUSIC: music, for example media playback
- STREAM_RING: ringtones
- STREAM_SYSTEM: system sounds
- STREAM_VOICE_CALL: in-call voice

Why so many categories? When developing on the desktop you rarely see this many sound types, but it makes sense when you think about it. For example, if a call comes in while you are listening to music, playback stops and you only hear the call; if you adjust the volume at that moment, the adjustment should only affect the call. When the call ends and music resumes, you should not have to adjust the volume again.

In fact, the system manages the data for these sound types separately. For AudioTrack, the stream type simply tells the system which category of sound this track belongs to, so the system can manage it accordingly.
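As a side note (my own sketch, not part of the original article; it assumes a valid Context named context), this per-stream policy is also why volume is adjusted per stream type through AudioManager:

// Sketch: adjusting the volume of one stream type only.
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
int max = am.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
am.setStreamVolume(AudioManager.STREAM_MUSIC, max / 2, 0);   // affects music playback only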

3.2 Analysis of getMinBufferSize

The AudioTrack example uses only a few functions. Let's look at the first one:

AudioTrack.getMinBufferSize(8000,                   // 8K sample points per second
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,   // dual channel
        AudioFormat.ENCODING_PCM_16BIT);

This leads into AudioTrack.java:

// Note that this is a static function.
static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
    int channelCount = 0;
    switch (channelConfig) {
    case AudioFormat.CHANNEL_OUT_MONO:
    case AudioFormat.CHANNEL_CONFIGURATION_MONO:
        channelCount = 1;
        break;
    case AudioFormat.CHANNEL_OUT_STEREO:
    case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
        channelCount = 2;   // see, the fancy name outside really just means the channel count
        break;
    default:
        loge("getMinBufferSize(): Invalid channel configuration.");
        return AudioTrack.ERROR_BAD_VALUE;
    }
    // Currently only PCM8 and PCM16 precision audio is supported.
    if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT)
            && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
        loge("getMinBufferSize(): Invalid audio format.");
        return AudioTrack.ERROR_BAD_VALUE;
    }
    // The sample rate is checked as well; human hearing is roughly between 20 Hz and 20 kHz.
    if ((sampleRateInHz < 4000) || (sampleRateInHz > 48000)) {
        loge("getMinBufferSize(): " + sampleRateInHz + "Hz is not a supported sample rate.");
        return AudioTrack.ERROR_BAD_VALUE;
    }
    // Call the native function; everything interesting is pushed down to the JNI layer.
    int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
    if ((size == -1) || (size == 0)) {
        loge("getMinBufferSize(): error querying hardware");
        return AudioTrack.ERROR;
    } else {
        return size;
    }
}

native_get_min_buff_size is implemented in framework/base/core/jni/android_media_AudioTrack.cpp. (Those who do not know JNI should learn it, otherwise you can only work at the Java layer, which is too limiting.) It corresponds to the following function:

static jint android_media_AudioTrack_get_min_buff_size(JNIEnv *env, jobject thiz,
        jint sampleRateInHertz, jint nbChannels, jint audioFormat) {
    // Note the parameters we passed in:
    // sampleRateInHertz = 8000
    // nbChannels = 2
    // audioFormat = AudioFormat.ENCODING_PCM_16BIT
    int afSamplingRate;
    int afFrameCount;
    uint32_t afLatency;
    // The following involves AudioSystem, which is not explained here.
    // For now, just know that some information is queried from AudioSystem.
    if (AudioSystem::getOutputSamplingRate(&afSamplingRate) != NO_ERROR) {
        return -1;
    }
    if (AudioSystem::getOutputFrameCount(&afFrameCount) != NO_ERROR) {
        return -1;
    }
    if (AudioSystem::getOutputLatency(&afLatency) != NO_ERROR) {
        return -1;
    }
    // The most common unit in audio is the frame. What does it mean? After much searching,
    // an explanation finally turned up in the ALSA wiki: a frame is the byte size of one
    // sample point multiplied by the number of channels. Why have frames at all? Because for
    // multi-channel audio the byte count of a single sample point is incomplete on its own:
    // during playback the data of all channels must be output together. So for convenience
    // we say how many frames there are per second, which expresses the amount of data
    // completely, independent of the channel count.
    // Ensure that buffer depth covers at least audio hardware latency.
    uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSamplingRate);
    if (minBufCount < 2) minBufCount = 2;
    uint32_t minFrameCount = (afFrameCount * sampleRateInHertz * minBufCount) / afSamplingRate;
    // Compute the minimum buffer size from the minimum frame count.
    int minBuffSize = minFrameCount
            * (audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1)
            * nbChannels;
    return minBuffSize;
}

After getMinBufferSize returns we have a buffer size that satisfies the minimum requirement, which gives the user a basis for allocating a buffer. Next we need to create the AudioTrack object.

3.3 Analysis of new AudioTrack

First look at the calling code:

AudioTrack trackplayer = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        8000,
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        bufsize,
        AudioTrack.MODE_STREAM);

Its implementation is in AudioTrack.java:

public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes, int mode) throws IllegalArgumentException {
    mState = STATE_UNINITIALIZED;
    // Get the Looper of the calling thread (this was covered in the MediaScanner analysis).
    if ((mInitializationLooper = Looper.myLooper()) == null) {
        mInitializationLooper = Looper.getMainLooper();
    }
    // Check whether the parameters are legal; we can ignore the details.
    audioParamCheck(streamType, sampleRateInHz, channelConfig, audioFormat, mode);
    // We used the size returned by getMinBufferSize, so this should not fail.
    audioBuffSizeCheck(bufferSizeInBytes);
    // Call native_setup in the native layer, passing in a WeakReference to ourselves.
    // If you do not know Java WeakReference, look it up; it is simple.
    int initResult = native_setup(new WeakReference(this),
            mStreamType,              // AudioManager.STREAM_MUSIC
            mSampleRate,              // 8000
            mChannels,                // 2
            mAudioFormat,             // AudioFormat.ENCODING_PCM_16BIT
            mNativeBufferSizeInBytes, // the value we just got from getMinBufferSize
            mDataLoadMode);           // MODE_STREAM
    ...
}

The call above finally reaches the following function in the JNI layer, in android_media_AudioTrack.cpp:

static int android_media_AudioTrack_native_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jint streamType, jint sampleRateInHertz, jint channels, jint audioFormat,
        jint buffSizeInBytes, jint memoryMode) {
    int afSampleRate;
    int afFrameCount;
    // Yet another pile of calls; what they do will be covered when we analyze AudioSystem.
    AudioSystem::getOutputFrameCount(&afFrameCount, streamType);
    AudioSystem::getOutputSamplingRate(&afSampleRate, streamType);
    AudioSystem::isOutputChannel(channels);
    // popCount counts how many bits of an integer are set to 1.
    int nbChannels = AudioSystem::popCount(channels);
    if (streamType == javaAudioTrackFields.STREAM_MUSIC) {
        atStreamType = AudioSystem::MUSIC;
    }
    int bytesPerSample = audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1;
    int format = audioFormat == javaAudioTrackFields.PCM16 ?
            AudioSystem::PCM_16_BIT : AudioSystem::PCM_8_BIT;
    int frameCount = buffSizeInBytes / (nbChannels * bytesPerSample);
    // The frame count above is computed from the buffer size and the size of one frame.
    // AudioTrackJniStorage is just a place that keeps some data, but it contains some
    // useful knowledge, explained in detail below.
    AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();
    jclass clazz = env->GetObjectClass(thiz);
    lpJniStorage->mCallbackData.audioTrack_class = (jclass)env->NewGlobalRef(clazz);
    lpJniStorage->mCallbackData.audioTrack_ref = env->NewGlobalRef(weak_this);
    lpJniStorage->mStreamType = atStreamType;
    // Create the real (native) AudioTrack object.
    AudioTrack* lpTrack = new AudioTrack();
    if (memoryMode == javaAudioTrackFields.MODE_STREAM) {
        // STREAM mode: just set the parameters on the track.
        lpTrack->set(
            atStreamType,       // stream type
            sampleRateInHertz,
            format,             // sample format, PCM
            channels,
            frameCount,
            0,                  // flags
            audioCallback,
            &(lpJniStorage->mCallbackData), // callback, callback data (user)
            0,                  // notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
            0,                  // shared memory: STREAM mode writes again and again, so no shared memory is needed
            true);              // thread can call Java
    } else if (memoryMode == javaAudioTrackFields.MODE_STATIC) {
        // STATIC mode: the user writes all the data in at once, and afterwards the
        // audioTrack reads it out by itself, so shared memory is needed here.
        // "Shared memory" means memory shared between the C++ AudioTrack and AudioFlinger,
        // because the real playback is done by AudioFlinger.
        lpJniStorage->allocSharedMem(buffSizeInBytes);
        lpTrack->set(
            atStreamType,       // stream type
            sampleRateInHertz,
            format,             // sample format, PCM
            channels,
            frameCount,
            0,                  // flags
            audioCallback,
            &(lpJniStorage->mCallbackData), // callback, callback data (user)
            0,                  // notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
            lpJniStorage->mMemBase, // shared mem
            true);              // thread can call Java
    }
    if (lpTrack->initCheck() != NO_ERROR) {
        LOGE("Error initializing AudioTrack");
        goto native_init_failure;
    }
    // Save the pointer to the C++ AudioTrack object into a field of the Java object,
    // tying the native-layer AudioTrack to the Java-layer AudioTrack.
    env->SetIntField(thiz, javaAudioTrackFields.nativeTrackInJavaObj, (int)lpTrack);
    env->SetIntField(thiz, javaAudioTrackFields.jniData, (int)lpJniStorage);
}
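To make the frame arithmetic above concrete, here is a small sketch (my own illustration, not part of the original walkthrough) that works the numbers for the example parameters of 8000 Hz, stereo, 16-bit PCM and compares them with what getMinBufferSize returns:

// Sketch: frame arithmetic for the example parameters (8000 Hz, stereo, 16-bit PCM).
int sampleRate     = 8000;
int channelCount   = 2;                              // stereo
int bytesPerSample = 2;                              // ENCODING_PCM_16BIT
int frameSize      = channelCount * bytesPerSample;  // 4 bytes per frame
int bytesPerSecond = sampleRate * frameSize;         // 32000 bytes for one second of audio
// getMinBufferSize() normally returns much less than one second of data: as the JNI code
// above shows, it only guarantees enough depth to cover the hardware latency.
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT);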

1 AudioTrackJniStorage explained

This class is really just a helper, but it contains some important knowledge, in particular the shared memory mechanism that Android wraps up. Once this is understood, copying memory between two processes becomes easy.

The code for AudioTrackJniStorage is simple.

struct audiotrack_callback_cookie {
    jclass  audioTrack_class;
    jobject audioTrack_ref;
};
// The cookie just keeps hold of some Java-side objects; nothing special.

class AudioTrackJniStorage {
public:
    sp<MemoryHeapBase>         mMemHeap;   // these two Memory members are very important
    sp<MemoryBase>             mMemBase;
    audiotrack_callback_cookie mCallbackData;
    int                        mStreamType;

    bool allocSharedMem(int sizeInBytes) {
        mMemHeap = new MemoryHeapBase(sizeInBytes, 0, "AudioTrack HeapBase");
        mMemBase = new MemoryBase(mMemHeap, 0, sizeInBytes);
        // Note the usage: first create a MemoryHeapBase, then pass it into a MemoryBase.
        return true;
    }
};

2 MemoryHeapBase

MemoryHeapBase is another set of classes Android built for memory management on top of the Binder mechanism. Since it is based on Binder, there must be a server side (BnXXX) and a proxy side (BpXXX). Look at the definition of MemoryHeapBase:

class MemoryHeapBase : public virtual BnMemoryHeap
{
    ...
};

Sure enough, it derives from BnMemoryHeap, i.e. the Bn (server) end, so it is hooked into Binder: functions called on the Bp end are eventually dispatched to the Bn end.

For those who do not understand the Binder mechanism, please refer to:

http://blog.csdn.net/Innost/archive/2011/01/08/6124685.aspx

There are several constructors; let's look at the one we use:

MemoryHeapBase::MemoryHeapBase(size_t size, uint32_t flags, char const * name)
    : mFD(-1), mSize(0), mBase(MAP_FAILED), mFlags(flags),
      mDevice(0), mNeedUnmap(false)
{
    const size_t pagesize = getpagesize();
    size = ((size + pagesize - 1) & ~(pagesize - 1));
    // Create the shared memory. ashmem_create_region is provided by the system: on a device
    // it opens the /dev/ashmem device, and on the host it opens a tmp file instead.
    int fd = ashmem_create_region(name == NULL ? "MemoryHeapBase" : name, size);
    mapfd(fd, size);  // mmap a block of memory from that fd
    // After mapfd finishes, mBase points to the start of the memory, mSize is the allocated
    // size, and mFD is the file descriptor returned by ashmem_create_region.
}

MemoryHeapBase provides the following functions to get the size and location of shared memory.

getBaseID() --> returns mFD; if it is negative, creating the shared memory failed.

getBase() --> returns mBase, the start address of the memory.

getSize() --> returns mSize, the size of the memory.

Besides MemoryHeapBase there is also MemoryBase, another class tied to the Binder mechanism.

It looks like a convenience wrapper on top of MemoryHeapBase, since it carries an offset.

So presumably this class conveniently reports the current write position (that is, the offset) within the buffer, which saves users from having to track read and write positions themselves.

class MemoryBase : public BnMemory {
public:
    MemoryBase(const sp<IMemoryHeap>& heap, ssize_t offset, size_t size);
    virtual sp<IMemoryHeap> getMemory(ssize_t* offset, size_t* size) const;
protected:
    size_t  getSize() const   { return mSize; }
    ssize_t getOffset() const { return mOffset; }
    const sp<IMemoryHeap>& getHeap() const { return mHeap; }
};

OK, with these two MemoryXXX classes we can guess how they are used:

- The BnXXX side first creates the BnMemoryHeapBase and BnMemoryBase objects.
- The BnMemoryBase is then passed across Binder to the BpXXX side.
- The BpXXX side can then use its BpMemoryBase proxy to reach the shared memory allocated by the BnXXX side.

Note that since this is memory shared between processes, and the Bp side operates on it with plain functions like memcpy, there is no synchronization protection built in; Android cannot add synchronization for this kind of shared memory system-wide. So when the shared memory is actually manipulated there must be some cross-process synchronization mechanism; we will run into it later when we look at actual playback.

In addition, the shared buffer created here is eventually used on the Bp side, that is, by AudioFlinger.

3.4 Analysis of play and write

At this point the Java layer calls play and write. Neither Java method does much; both go straight to the native layer to do the real work.

First take a look at the JNI function corresponding to the play function.

static void android_media_AudioTrack_start(JNIEnv *env, jobject thiz) {
    // See: fetch the saved C++ AudioTrack pointer from the Java AudioTrack object and cast
    // it straight from an int back to a pointer. If ARM ever becomes a 64-bit platform,
    // it will be interesting to see how Google changes this!
    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
            thiz, javaAudioTrackFields.nativeTrackInJavaObj);
    lpTrack->start();  // discussed later
}

Here is write. We are writing a short array.

static jint android_media_AudioTrack_native_write_short(JNIEnv *env, jobject thiz,
        jshortArray javaAudioData, jint offsetInShorts, jint sizeInShorts,
        jint javaAudioFormat) {
    return (android_media_AudioTrack_native_write(env, thiz,
            (jbyteArray)javaAudioData, offsetInShorts * 2, sizeInShorts * 2,
            javaAudioFormat) / 2);
}

// Whether the data arrives as bytes or shorts, it eventually reaches the important
// function writeToTrack.
jint writeToTrack(AudioTrack* pTrack, jint audioFormat, jbyte* data,
        jint offsetInBytes, jint sizeInBytes) {
    ssize_t written = 0;
    // Regular write(), or copy the data into the AudioTrack's shared memory?
    if (pTrack->sharedBuffer() == 0) {
        // STREAM mode: the track has no shared memory.
        // Remember the set() we called in native_setup? AudioTrackJniStorage did not
        // create shared memory in that case.
        written = pTrack->write(data + offsetInBytes, sizeInBytes);
    } else {
        if (audioFormat == javaAudioTrackFields.PCM16) {
            // Writing to shared memory: check the capacity first.
            if ((size_t)sizeInBytes > pTrack->sharedBuffer()->size()) {
                sizeInBytes = pTrack->sharedBuffer()->size();
            }
            // See? In STATIC mode the data is memcpy'd directly into the shared memory,
            // which is of course the memory from AudioTrackJniStorage that we handed to
            // the AudioTrack during set().
            memcpy(pTrack->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
        } else if (audioFormat == javaAudioTrackFields.PCM8) {
            // PCM8 data must first be converted to PCM16.
        }
    }
    return written;
}

At this point it all looks quite simple: the Java-layer AudioTrack does little more than call write, and the JNI layer hands the actual data to the C++ AudioTrack's write. There is nothing else particularly interesting at the JNI level.
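As a practical aside (a minimal sketch of my own, not from the original text; decodeNextPcmChunk() is a hypothetical helper that returns the next block of 16-bit PCM, or null at end of stream), a typical STREAM-mode feeding loop built on the trackplayer from section 3.1 looks roughly like this:

// Sketch: feeding an AudioTrack in MODE_STREAM; assumes trackplayer from section 3.1.
trackplayer.play();
byte[] chunk;
while ((chunk = decodeNextPcmChunk()) != null) {    // hypothetical decoder helper
    int pos = 0;
    while (pos < chunk.length) {
        // write() may consume fewer bytes than requested; keep writing the remainder.
        int n = trackplayer.write(chunk, pos, chunk.length - pos);
        if (n <= 0) break;                          // error code or stopped track
        pos += n;
    }
}
trackplayer.stop();
trackplayer.release();

Each write here crosses into native code and ends up in the C++ AudioTrack::write shown later, which is exactly the per-call overhead mentioned in the discussion of MODE_STREAM.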

4 AudioTrack (C++ layer)

Following on from the above, we know the JNI layer performs these steps:

- new an AudioTrack;
- call the set function, passing in AudioTrackJniStorage and other information;
- call AudioTrack's start function;
- call AudioTrack's write function.

So let's take a look at the real C++ AudioTrack.

AudioTrack.cpp is located in framework/base/libmedia/AudioTrack.cpp

4.1 new AudioTrack() and the set call

The JNI layer calls the simplest constructor:

AudioTrack::AudioTrack()
    : mStatus(NO_INIT)
    // the status is initialized to NO_INIT; Android makes heavy use of the state design pattern
{
}

Next comes set. Recall what the JNI layer passed to set:

lpTrack->set(
    atStreamType,        // should be MUSIC here
    sampleRateInHertz,   // 8000
    format,              // should be PCM_16
    channels,            // stereo = 2
    frameCount,
    0,                   // flags
    audioCallback,       // the callback function in the JNI layer
    &(lpJniStorage->mCallbackData), // argument for the callback
    0,                   // notification callback, meant to signal that AudioTrack needs data; not used for now
    0,                   // no shared memory in STREAM mode
    true);               // the callback thread may call into Java

Now look at the set function itself:

status_t AudioTrack::set(int streamType, uint32_t sampleRate, int format, int channels,
        int frameCount, uint32_t flags, callback_t cbf, void* user, int notificationFrames,
        const sp<IMemory>& sharedBuffer, bool threadCanCallJava)
{
    ... // a pile of parameter checks up front; AudioSystem will be discussed later

    audio_io_handle_t output = AudioSystem::getOutput(
            (AudioSystem::stream_type)streamType,
            sampleRate, format, channels, (AudioSystem::output_flags)flags);

    // createTrack? This looks like where the real work is done.
    status_t status = createTrack(streamType, sampleRate, format, channelCount,
            frameCount, flags, sharedBuffer, output);

    // cbf is the callback function audioCallback passed in from JNI.
    if (cbf != 0) {
        // so this thread gets created no matter what!
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
    }
    return NO_ERROR;
}

Now look at createTrack, which does the real work.

status_t AudioTrack::createTrack(int streamType, uint32_t sampleRate, int format,
        int channelCount, int frameCount, uint32_t flags,
        const sp<IMemory>& sharedBuffer, audio_io_handle_t output)
{
    status_t status;
    // Ah, this seems to have something to do with AudioFlinger.
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    // The following call ends up inside AudioFlinger; leave it alone for now.
    sp<IAudioTrack> track = audioFlinger->createTrack(getpid(), streamType, sampleRate,
            format, channelCount, frameCount, (uint16_t)flags, sharedBuffer, output);
    // Get the control block that AudioFlinger hands back.
    sp<IMemory> cblk = track->getCblk();
    mAudioTrack.clear();
    mAudioTrack = track;
    mCblkMemory.clear();   // sp's clear(); think of it roughly as delete xxx
    mCblkMemory = cblk;
    mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
    mCblk->out = 1;
    mFrameCount = mCblk->frameCount;
    if (sharedBuffer == 0) {
        // Finally something buffer-related. Note that in STREAM mode no shared buffer is
        // passed in here, yet the data still needs a buffer to live in. AudioTrack did not
        // create one itself, so it can only be the buffer just obtained from AudioFlinger,
        // right after the audio_track_cblk_t header.
        mCblk->buffers = (char*)mCblk + sizeof(audio_track_cblk_t);
    }
    return NO_ERROR;
}

Remember we said that the MemoryXXX classes have no synchronization mechanism of their own? So something here must provide the synchronization.

It is the audio_track_cblk_t structure. Its header file is

framework/base/include/private/media/AudioTrackShared.h

and its implementation is in AudioTrack.cpp.

audio_track_cblk_t::audio_track_cblk_t()
    // See the SHARED flags below? They all mean "shared across processes". This is not
    // explained here; it will be covered in detail later when synchronization is introduced.
    : lock(Mutex::SHARED), cv(Condition::SHARED),
      user(0), server(0), userBase(0), serverBase(0),
      buffers(0), frameCount(0),
      loopStart(UINT_MAX), loopEnd(UINT_MAX), loopCount(0),
      volumeLR(0), flowControlFlag(1), forceReady(0)
{
}

At this point everyone should have a rough overall picture:

- AudioTrack obtains an IAudioTrack object from AudioFlinger, and with it a very important data structure, audio_track_cblk_t, which contains the buffer address, some of the machinery for inter-process synchronization, and probably data positions and so on.
- AudioTrack starts a thread called AudioTrackThread. What this thread does is not yet clear.
- AudioTrack calls its write function, which presumably writes the data into that shared buffer; IAudioTrack on the other side, in the AudioFlinger process (AudioFlinger is actually a service running in mediaserver), receives the data and eventually writes it to the audio device.

So let's see what AudioTrackThread did.

The statement called is:

mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);

AudioTrackThread is derived from Thread, which was discussed together with the Binder mechanism.

In any case, the threadLoop function of AudioTrackThread will eventually be called.

Take a look at the constructor first.

AudioTrack::AudioTrackThread::AudioTrackThread(AudioTrack& receiver, bool bCanCallJava)
    : Thread(bCanCallJava), mReceiver(receiver)
{
    // mReceiver is the AudioTrack object
    // bCanCallJava is true
}

The startup of this thread is triggered by the start function of AudioTrack.

void AudioTrack::start()
{
    // start() triggers the AudioTrackThread: a new thread is created and it executes
    // mAudioTrackThread's threadLoop.
    sp<AudioTrackThread> t = mAudioTrackThread;
    t->run("AudioTrackThread", THREAD_PRIORITY_AUDIO_CLIENT);
    // also make the corresponding track inside AudioFlinger start
    status_t status = mAudioTrack->start();
}

bool AudioTrack::AudioTrackThread::threadLoop()
{
    // Annoyingly, it just calls back into AudioTrack's processAudioBuffer function.
    return mReceiver.processAudioBuffer(this);
}

bool AudioTrack::processAudioBuffer(const sp<AudioTrackThread>& thread)
{
    Buffer audioBuffer;
    uint32_t frames;
    size_t writtenSize;
    ...
    // callback 1
    mCbf(EVENT_UNDERRUN, mUserData, 0);
    ...
    // callback 2: again it just sends some information to the JNI layer
    mCbf(EVENT_BUFFER_END, mUserData, 0);
    // Manage loop end callback
    while (mLoopCount > mCblk->loopCount) {
        mCbf(EVENT_LOOP_END, mUserData, (void *)&loopCount);
    }
    // The part below seems to write data.
    do {
        audioBuffer.frameCount = frames;
        // obtain a buffer
        status_t err = obtainBuffer(&audioBuffer, 1);
        size_t reqSize = audioBuffer.size;
        // The buffer is handed back to the JNI layer through this callback. But this is a
        // single thread, and the application above is still calling write all the time.
        // How can data be written in both places?
        mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
        audioBuffer.size = writtenSize;
        frames -= audioBuffer.frameCount;
        // release the buffer; paired with obtain, it smells of lock and unlock
        releaseBuffer(&audioBuffer);
    } while (frames);
    return true;
}

So are there really two places that write data? We have to look at mCbf; the flag to watch is EVENT_MORE_DATA. mCbf is what was passed into the C++ AudioTrack in set, and the actual function is audioCallback in the JNI layer:

static void audioCallback(int event, void* user, void* info) {
    if (event == AudioTrack::EVENT_MORE_DATA) {
        // Great: this function does not write any data at all.
        AudioTrack::Buffer* pBuff = (AudioTrack::Buffer*)info;
        pBuff->size = 0;
    }
}

Judging from the code, Google originally intended to feed data through this asynchronous callback as well, but presumably found that it makes things more complicated, especially for the Java AudioTrack exposed to users, so the path was quietly bypassed.

Great. It seems that only the user's write actually delivers data, and this AudioTrackThread does nothing much beyond delivering notifications.

Let's look at write.

4.2 write

Simple enough: write is just obtainBuffer, a memcpy of the data, then releaseBuffer. At a squint you can guess that obtainBuffer must lock the memory and releaseBuffer must unlock it.

ssize_t AudioTrack::write(const void* buffer, size_t userSize)
{
    do {
        audioBuffer.frameCount = userSize / frameSize();
        status_t err = obtainBuffer(&audioBuffer, -1);
        size_t toWrite;
        toWrite = audioBuffer.size;
        memcpy(audioBuffer.i8, src, toWrite);
        src += toWrite;
        userSize -= toWrite;
        written += toWrite;
        releaseBuffer(&audioBuffer);
    } while (userSize);
    return written;
}

obtainBuffer is rather complicated, but we only need to understand its intent.

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, int32_t waitCount)
{
    // Forgive the heavy omissions; most of what is left out deals with the current data position.
    uint32_t framesAvail = cblk->framesAvailable();
    cblk->lock.lock();       // see, a lock
    result = cblk->cv.waitRelative(cblk->lock, milliseconds(waitTimeMs));
    // Many places have to check the state of the remote AudioFlinger, for example whether it
    // has died. Isn't there a better way to handle this kind of thing in one place?
    if (result == DEAD_OBJECT) {
        result = createTrack(mStreamType, cblk->sampleRate, mFormat, mChannelCount,
                mFrameCount, mFlags, mSharedBuffer, getOutput());
    }
    // get the buffer
    audioBuffer->raw = (int8_t *)cblk->buffer(u);
    return active ? status_t(NO_ERROR) : status_t(STOPPED);
}

Now look at releaseBuffer:

void AudioTrack::releaseBuffer(Buffer* audioBuffer)
{
    audio_track_cblk_t* cblk = mCblk;
    cblk->stepUser(audioBuffer->frameCount);
}

uint32_t audio_track_cblk_t::stepUser(uint32_t frameCount)
{
    uint32_t u = this->user;
    u += frameCount;
    if (out) {
        if (bufferTimeoutMs == MAX_STARTUP_TIMEOUT_MS - 1) {
            bufferTimeoutMs = MAX_RUN_TIMEOUT_MS;
        }
    } else if (u > this->server) {
        u = this->server;
    }
    if (u >= userBase + this->frameCount) {
        userBase += this->frameCount;
    }
    this->user = u;
    flowControlFlag = 0;
    return u;
}

Strangely, releaseBuffer has no unlock operation. Did I misread it?

Look at obtainBuffer again. Why is it so obscure?

It turns out obtainBuffer takes the lock at certain points and releases it at others. Can't you see that lock, unlock and wait operations are scattered all over obtainBuffer? That must be what is going on. No wonder it is written in such a convoluted way, even resorting to the rarely used goto.

Alas, is all that really necessary?

5 AudioTrack summary

From this analysis, I would highlight the following points:

The working principles of AudioTrack, especially the data transfer, have been analyzed in some detail, including shared memory and cross-process synchronization, which should also clear up a lot of confusion.

It appears the heaviest work is done inside AudioFlinger; this walkthrough of AudioTrack provides a starting point for analyzing AudioFlinger further.

As for the working flow, once more (for the Java layer just look at the first example; there is really nothing more to say):

- An AudioTrack is new'ed, then set is called with a pile of information; through the Binder mechanism it reaches AudioFlinger on the other side, obtains an IAudioTrack object, and interacts with AudioFlinger through it.
- After start is called, a thread is started specifically for callback handling; the code even contains a callback-driven data-copy path, but the JNI-layer callback does not actually write any data into it. So all that matters is write.
- The user calls write again and again, and AudioTrack does little more than memcpy the data into the shared buffer.

Thank you for reading. That covers "how to use AudioTrack API"; after working through this article you should have a deeper understanding of how AudioTrack works, though the details still need to be verified in practice.
