

What is the use of AudioFlinger in Android

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

In this article, the editor gives a detailed introduction to "what is the use of AudioFlinger in Android". The content is detailed, the steps are clear, and the details are handled carefully. I hope this article helps you resolve your doubts; let's follow the editor's line of thought and learn something new.

The birth of AudioFlinger

AF is a service; I don't need to say any more about that, do I? The code is in framework/base/media/mediaserver/Main_mediaServer.cpp:

    int main(int argc, char** argv) {
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        ...
        AudioFlinger::instantiate();        // --> AF instantiation
        AudioPolicyService::instantiate();  // --> APS instantiation
        ...
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }

Wow, this one program carries a heavy load. I didn't expect that: why are AF and APS in the same basket as MediaService and CameraService?

Take a look at AF's static instantiate function, in framework/base/libs/audioflinger/AudioFlinger.cpp:

    void AudioFlinger::instantiate() {
        defaultServiceManager()->addService(   // add the AF instance to the system services
                String16("media.audio_flinger"), new AudioFlinger());
    }

Let's see what its constructor does.

    AudioFlinger::AudioFlinger()
        : BnAudioFlinger(),     // initialize the base class
          mAudioHardware(0),    // HAL object for the audio hardware
          mMasterVolume(1.0f),
          mMasterMute(false),
          mNextThreadId(0)
    {
        mHardwareStatus = AUDIO_HW_IDLE;
        // create the HAL object representing the audio hardware
        mAudioHardware = AudioHardwareInterface::create();
        mHardwareStatus = AUDIO_HW_INIT;
        if (mAudioHardware->initCheck() == NO_ERROR) {
            // setting the system's sound mode really means setting the hardware mode
            setMode(AudioSystem::MODE_NORMAL);
            setMasterVolume(1.0f);
            setMasterMute(false);
        }
    }

AF has many setXXX functions. What do they actually do? Let's look at setMode.

    status_t AudioFlinger::setMode(int mode) {
        mHardwareStatus = AUDIO_HW_SET_MODE;
        status_t ret = mAudioHardware->setMode(mode);  // set the hardware mode
        mHardwareStatus = AUDIO_HW_IDLE;
        return ret;
    }

The other setXXX functions are similar; basically they all touch the hardware object. Let's set them aside for now, until we analyze the audio policy.

Well, when the Android system starts, AF has the hardware ready. But does creating a hardware object mean we can already play audio?

2.2 The process of AT calling AF

I'll simply list the sequence of AT's calls into AF here, and then analyze how AF works in that order.

-- see Section 4.1 of the AudioTrack analysis

1. Create

    AudioTrack* lpTrack = new AudioTrack();
    lpTrack->set(...);

This takes us into the C++ AudioTrack. Here is AT's set function:

    audio_io_handle_t output = AudioSystem::getOutput(
            (AudioSystem::stream_type)streamType,
            sampleRate, format, channels,
            (AudioSystem::output_flags)flags);

    status_t status = createTrack(streamType, sampleRate, format, channelCount,
            frameCount, flags, sharedBuffer, output);
    // --> createTrack is what talks to AF. The important statements inside it:
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    // the key call: AF's createTrack returns an IAudioTrack object
    sp<IAudioTrack> track = audioFlinger->createTrack(...);
    sp<IMemory> cblk = track->getCblk();  // get the management structure of the shared memory

To summarize the creation step: AT calls AF's createTrack to get an IAudioTrack object, and then obtains the shared-memory object from it.

2. Start and write

If you look at AT's start, it presumably just calls IAudioTrack's start, right?

    void AudioTrack::start() {
        // sure enough
        status_t status = mAudioTrack->start();
    }

What about write? As we said earlier, AT writes through a shared buffer:

- Lock the cache

- Write the cache

- Unlock the cache

Note that there is a problem with Lock and Unlock here. What is the problem? We'll talk later.

If that's the case, then AF must have a thread doing the mirror-image work:

- Lock

- Read the cache, write to the hardware

- Unlock
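The two bullet lists above describe a classic producer/consumer exchange. Here is a minimal sketch of that pattern, under loud assumptions: the class `SharedAudioBuffer` and its methods are hypothetical, and a plain `std::mutex` stands in for the futex-based synchronization the real `audio_track_cblk_t` uses across processes.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <mutex>
#include <vector>

// Illustrative sketch (not real AF code) of "lock / write / unlock" on the
// AT side and "lock / read / unlock" on the AF side over one shared buffer.
class SharedAudioBuffer {
public:
    explicit SharedAudioBuffer(size_t frames) : mBuffer(frames, 0) {}

    // Producer side (AudioTrack): lock cache, write cache, unlock cache.
    size_t write(const int16_t* src, size_t frames) {
        std::lock_guard<std::mutex> lock(mLock);              // lock
        size_t n = std::min(frames, mBuffer.size() - mFilled);
        std::memcpy(mBuffer.data() + mFilled, src, n * sizeof(int16_t));
        mFilled += n;                                         // write
        return n;                                             // unlock on return
    }

    // Consumer side (AF thread): lock, read cache (then write hardware), unlock.
    size_t read(int16_t* dst, size_t frames) {
        std::lock_guard<std::mutex> lock(mLock);
        size_t n = std::min(frames, mFilled);
        std::memcpy(dst, mBuffer.data(), n * sizeof(int16_t));
        mFilled -= n;
        std::memmove(mBuffer.data(), mBuffer.data() + n,
                     mFilled * sizeof(int16_t));
        return n;
    }

private:
    std::mutex mLock;
    std::vector<int16_t> mBuffer;
    size_t mFilled = 0;
};
```

The real implementation avoids copying the tail with memmove by using a circular index; this sketch only shows the locking discipline.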

In short, we know AT's process of calling AF. Look at the next one.

2.3 AF's workflow

1 createTrack

    sp<IAudioTrack> AudioFlinger::createTrack(
            pid_t pid,            // pid of the AT process
            int streamType,       // MUSIC stream type
            uint32_t sampleRate,  // 8000, the sampling rate
            int format,           // PCM_16 type
            int channelCount,     // 2, dual channel
            int frameCount,       // number of frames the buffer to be created must hold
            uint32_t flags,
            const sp<IMemory>& sharedBuffer, // shared buffer passed in by AT; empty here
            int output,           // the index for the MUSIC stream type, obtained from AudioSystem
            status_t* status)
    {
        sp<PlaybackThread::Track> track;
        sp<TrackHandle> trackHandle;
        sp<Client> client;
        wp<Client> wclient;
        status_t lStatus;
        {
            Mutex::Autolock _l(mLock);
            // find the thread corresponding to the output handle?
            PlaybackThread* thread = checkPlaybackThread_l(output);
            // Is this process already a client of AF? A word of explanation:
            // since this is a C/S architecture, AF as the server must keep
            // state for each AT acting as a client. AF identifies clients
            // solely by pid; mClients is a map-like container.
            wclient = mClients.valueFor(pid);
            if (wclient != NULL) {
                ...
            } else {
                // no record of this client yet: create one and add it to the map
                client = new Client(this, pid);
                mClients.add(pid, client);
            }
            // create a track from the thread object we just found
            track = thread->createTrack_l(client, streamType, sampleRate, format,
                    channelCount, frameCount, sharedBuffer, &lStatus);
        }
        // oh, and there is also a TrackHandle; the TrackHandle object is what
        // gets returned to the AT side
        trackHandle = new TrackHandle(track);
        return trackHandle;
    }

In this one AF function, a lot of new data types suddenly appear. To tell the truth, when I first encountered this my brain crashed from being hit with so many things at once. Let's not get bogged down in them; I'll analyze them one by one.

Go to checkPlaybackThread_l and have a look.

    AudioFlinger::PlaybackThread* AudioFlinger::checkPlaybackThread_l(int output) const {
        PlaybackThread* thread = NULL;
        // whenever you see indexOfKey, you should immediately think:
        // oh, this is probably a map or something like it; given a key,
        // you can find the actual value
        if (mPlaybackThreads.indexOfKey(output) >= 0) {
            thread = (PlaybackThread*)mPlaybackThreads.valueFor(output).get();
        }
        // so this function finds, among a set of threads, the one that
        // corresponds to the output value
        return thread;
    }

It's confusing to see this:

- AF's constructor created no threads, only the audio HAL object.

- If AT is AF's first client, then nowhere in the call path we just traced did we see a thread being created.

- What is an output, anyway? Why can it be used as a key to find a thread?

It seems that we have to go to the source of Output.

We know that the source of output is obtained from the set function of AT:

    audio_io_handle_t output = AudioSystem::getOutput(
            (AudioSystem::stream_type)streamType,  // MUSIC type
            sampleRate,                            // 8000
            format,                                // PCM_16
            channels,                              // 2, dual channel
            (AudioSystem::output_flags)flags);     // 0

The parameters need no further comment; we know these values are passed in from AT as the entry point.

AT's own createTrack then eventually passes this output value to AF. The audio_io_handle_t type is just an int.

// handle? That term is rarely seen under Linux; influence from MS again, perhaps?

Let's go into AudioSystem::getOutput and have a look. Note that this is the first such call to the system, and it happens in the AudioTrack process. AudioSystem lives in framework/base/media/libmedia/AudioSystem.cpp.

    audio_io_handle_t AudioSystem::getOutput(stream_type stream,
            uint32_t samplingRate, uint32_t format, uint32_t channels,
            output_flags flags)
    {
        audio_io_handle_t output = 0;
        if ((flags & AudioSystem::OUTPUT_FLAG_DIRECT) == 0 &&
            ((stream != AudioSystem::VOICE_CALL && stream != AudioSystem::BLUETOOTH_SCO) ||
             channels != AudioSystem::CHANNEL_OUT_MONO ||
             (samplingRate != 8000 && samplingRate != 16000))) {
            Mutex::Autolock _l(gLock);
            // with our parameters we take this branch:
            // look up the output for stream=MUSIC in the map. Unfortunately
            // this is our first time in here, so output must be 0
            output = AudioSystem::gStreamOutputMap.valueFor(stream);
        }
        if (output == 0) {
            // I'm dizzy; now we go to AudioPolicyService (APS)
            // and let it do getOutput
            const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
            output = aps->getOutput(stream, samplingRate, format, channels, flags);
            if ((flags & AudioSystem::OUTPUT_FLAG_DIRECT) == 0) {
                Mutex::Autolock _l(gLock);
                // once we have an output, add it to the map AudioSystem
                // maintains. To put it bluntly: cache the information, to
                // avoid the trouble of harassing APS next time!
                AudioSystem::gStreamOutputMap.add(stream, output);
            }
        }
        return output;
    }
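The caching idea in getOutput can be boiled down to a few lines. A hypothetical sketch, assuming a stand-in `queryPolicyService` for the cross-process call to APS and a plain `std::map` in place of gStreamOutputMap (names and handle values here are invented for illustration):

```cpp
#include <map>

using audio_io_handle_t = int;

// stand-in for the Binder call aps->getOutput(...); the "100 + stream"
// handle value is purely hypothetical
static audio_io_handle_t queryPolicyService(int stream) {
    return 100 + stream;
}

static std::map<int /*stream*/, audio_io_handle_t> gStreamOutputMap;

audio_io_handle_t getOutputCached(int stream) {
    auto it = gStreamOutputMap.find(stream);
    if (it != gStreamOutputMap.end())
        return it->second;                   // cache hit: APS is not bothered
    audio_io_handle_t output = queryPolicyService(stream);
    if (output != 0)
        gStreamOutputMap[stream] = output;   // remember it for next time
    return output;
}
```

The first call for a given stream pays the cost of the policy-service round trip; every later call is a local map lookup.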

So what now? We need to go to APS to find out about output?

No way around it; let's go in. First we have to see how APS is created. As I just said, it is instantiated in Main_mediaServer.cpp together with AF.

It lives in framework/base/libs/audioflinger/AudioPolicyService.cpp.

    AudioPolicyService::AudioPolicyService()
        : BnAudioPolicyService(), mpPolicyManager(NULL)
    {
        // more on these two threads later
        mTonePlaybackThread = new AudioCommandThread(String8(""));
        mAudioCommandThread = new AudioCommandThread(String8("ApmCommandThread"));
    #if (defined GENERIC_AUDIO) || (defined AUDIO_POLICY_TEST)
        // oh, the ubiquitous generic AudioPolicyManager, constructed with
        // this as a parameter. Let's look at AudioPolicyManagerBase first
        mpPolicyManager = new AudioPolicyManagerBase(this);
    #else
        // or the vendor-specific AudioPolicyManager supplied by the
        // hardware manufacturer
        mpPolicyManager = createAudioPolicyManager(this);
    #endif
    }

Let's take a look at the constructor of AudioPolicyManagerBase, in framework/base/libs/audioflinger/AudioPolicyManagerBase.cpp.

    AudioPolicyManagerBase::AudioPolicyManagerBase(
            AudioPolicyClientInterface* clientInterface)
        : mPhoneState(AudioSystem::MODE_NORMAL), mRingerMode(0),
          mMusicStopTime(0), mLimitRingtoneVolume(false)
    {
        // the client here is APS, which just passed itself in via this
        mpClientInterface = clientInterface;
        AudioOutputDescriptor* outputDesc = new AudioOutputDescriptor();
        outputDesc->mDevice = (uint32_t)AudioSystem::DEVICE_OUT_SPEAKER;
        // openOutput is handed right back to APS's openOutput. Really roundabout!
        mHardwareOutput = mpClientInterface->openOutput(
                &outputDesc->mDevice,
                &outputDesc->mSamplingRate,
                &outputDesc->mFormat,
                &outputDesc->mChannels,
                &outputDesc->mLatency,
                outputDesc->mFlags);
    }

Well, it looks like we still have to go back to APS.

    audio_io_handle_t AudioPolicyService::openOutput(uint32_t* pDevices,
            uint32_t* pSamplingRate, uint32_t* pFormat, uint32_t* pChannels,
            uint32_t* pLatencyMs, AudioSystem::output_flags flags)
    {
        sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
        // FT, FT, FT: after such a big circle, we are back in AudioFlinger?
        return af->openOutput(pDevices, pSamplingRate, (uint32_t*)pFormat,
                pChannels, pLatencyMs, flags);
    }

Once we recover from the shock, let's retrace our footsteps:

- In AudioTrack, the set function is called.

- set obtains an output handle through AudioSystem::getOutput.

- AudioSystem's getOutput calls AudioPolicyService's getOutput.

- Before following APS's getOutput, we went to see what APS creates at construction time.

- We found that when APS is created it creates an AudioPolicyManagerBase, and that AMB's construction calls back into APS's openOutput.

- APS's openOutput in turn calls AudioFlinger's openOutput.

One question remains: are the set parameters in AT the same as the ones eventually passed into AF's openOutput during APS construction? If not, what are the parameters of openOutput at construction time?

Let's put aside the suspense and take a look at APS's getOutPut.

    audio_io_handle_t AudioPolicyService::getOutput(AudioSystem::stream_type stream,
            uint32_t samplingRate, uint32_t format, uint32_t channels,
            AudioSystem::output_flags flags)
    {
        Mutex::Autolock _l(mLock);
        // APS does not do the work itself; AudioPolicyManagerBase does
        return mpPolicyManager->getOutput(stream, samplingRate, format,
                channels, flags);
    }

Go in and have a look.

    audio_io_handle_t AudioPolicyManagerBase::getOutput(
            AudioSystem::stream_type stream, uint32_t samplingRate,
            uint32_t format, uint32_t channels,
            AudioSystem::output_flags flags)
    {
        audio_io_handle_t output = 0;
        uint32_t latency = 0;
        // open a non-direct output
        output = mHardwareOutput; // where was this created? In AMB's constructor.
        ...
        return output;
    }

We'll save the detailed AMB analysis for the discussion of the audio system's policy. In any case, at this point we know that when APS is constructed, an output is opened, and opening that output calls AF's openOutput.

    int AudioFlinger::openOutput(uint32_t* pDevices, uint32_t* pSamplingRate,
            uint32_t* pFormat, uint32_t* pChannels, uint32_t* pLatencyMs,
            uint32_t flags)
    {
        status_t status;
        PlaybackThread* thread = NULL;
        mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
        uint32_t samplingRate = pSamplingRate ? *pSamplingRate : 0;
        uint32_t format = pFormat ? *pFormat : 0;
        uint32_t channels = pChannels ? *pChannels : 0;
        uint32_t latency = pLatencyMs ? *pLatencyMs : 0;

        Mutex::Autolock _l(mLock);
        // ask the audio-hardware HAL object to create an AudioStreamOut object
        AudioStreamOut* output = mAudioHardware->openOutputStream(*pDevices,
                (int*)&format, &channels, &samplingRate, &status);
        mHardwareStatus = AUDIO_HW_IDLE;
        if (output != 0) {
            // create a Mixer thread
            thread = new MixerThread(this, output, ++mNextThreadId);
            // found it at last: add this thread to the thread-management container
            mPlaybackThreads.add(mNextThreadId, thread);
        }
        return mNextThreadId;
    }

I see. So before AT calls AF's createTrack, AF has already created the thread at some earlier point, and it is a Mixer-type thread, which sounds related to audio mixing. This also matches the AF job we envisioned at the start: lock, read the cache, write the audio hardware, unlock. All of that is probably done in this thread.

2 continue createTrack

    sp<IAudioTrack> AudioFlinger::createTrack(pid_t pid, int streamType,
            uint32_t sampleRate, int format, int channelCount, int frameCount,
            uint32_t flags, const sp<IMemory>& sharedBuffer, int output,
            status_t* status)
    {
        sp<PlaybackThread::Track> track;
        sp<TrackHandle> trackHandle;
        sp<Client> client;
        wp<Client> wclient;
        status_t lStatus;
        {
            // by now we can find the corresponding thread
            Mutex::Autolock _l(mLock);
            PlaybackThread* thread = checkPlaybackThread_l(output);
            // dimly, we call the thread object's createTrack_l
            track = thread->createTrack_l(client, streamType, sampleRate, format,
                    channelCount, frameCount, sharedBuffer, &lStatus);
        }
        trackHandle = new TrackHandle(track);
        return trackHandle; // <-- note: this object is what gets returned to the AT process
    }

It really is too roundabout. Let's go into thread->createTrack_l. The _l suffix means the caller has already acquired the synchronization lock before entering the function.

Follow along in Source Insight (Ctrl + left mouse button) into the function below.

Its signature is remarkably long. Why is that?

It turns out Android's C++ classes define a large number of inner classes. To be honest, in my years of C++ experience I have rarely seen inner classes used this heavily; of course, you could say STL also uses them a lot.

Let's just treat a C++ inner class as an ordinary class. It really has no special semantics beyond scoping: function calls, public/private and so on work just like an outer class. This is very different from Java's inner classes.

    sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
            const sp<AudioFlinger::Client>& client, int streamType,
            uint32_t sampleRate, int format, int channelCount, int frameCount,
            const sp<IMemory>& sharedBuffer, status_t* status)
    {
        sp<Track> track;
        status_t lStatus;
        {   // scope for mLock
            Mutex::Autolock _l(mLock);
            // new a Track object. I'm a little annoyed: Android really wraps
            // layer upon layer, and the names are all so similar.
            // Look at the parameters, and note sharedBuffer: its value here
            // should be 0.
            track = new Track(this, client, streamType, sampleRate, format,
                    channelCount, frameCount, sharedBuffer);
            mTracks.add(track); // add this track to an array, for management purposes
        }
        lStatus = NO_ERROR;
        return track;
    }

Seeing that array, can you guess what's coming? By this point we have:

- a MixerThread, holding an array that stores the tracks.

So no matter how many AudioTracks exist, each ends up with one track object on the AF side, and all of those track objects are handled by a single thread object. No wonder it's called a Mixer.

Let's look at new Track; we still haven't found where the shared memory is created!

    AudioFlinger::PlaybackThread::Track::Track(const wp<ThreadBase>& thread,
            const sp<Client>& client, int streamType, uint32_t sampleRate,
            int format, int channelCount, int frameCount,
            const sp<IMemory>& sharedBuffer)
        : TrackBase(thread, client, sampleRate, format, channelCount,
                    frameCount, 0, sharedBuffer),
          mMute(false), mSharedBuffer(sharedBuffer), mName(-1)
    {
        // mCblk != NULL? When was it created?
        // We can only look at the base class TrackBase. Still annoyed:
        // too much inheritance.
        if (mCblk != NULL) {
            mVolume[0] = 1.0f;
            mVolume[1] = 1.0f;
            mStreamType = streamType;
            mCblk->frameSize = AudioSystem::isLinearPCM(format) ?
                    channelCount * sizeof(int16_t) : sizeof(int8_t);
        }
    }

Let's see what the base class TrackBase is doing.

    AudioFlinger::ThreadBase::TrackBase::TrackBase(const wp<ThreadBase>& thread,
            const sp<Client>& client, uint32_t sampleRate, int format,
            int channelCount, int frameCount, uint32_t flags,
            const sp<IMemory>& sharedBuffer)
        : RefBase(), mThread(thread), mClient(client), mCblk(0),
          mFrameCount(0), mState(IDLE), mClientTid(-1), mFormat(format),
          mFlags(flags & ~SYSTEM_FLAGS_MASK)
    {
        size_t size = sizeof(audio_track_cblk_t);
        size_t bufferSize = frameCount * channelCount * sizeof(int16_t);
        if (sharedBuffer == 0) {
            size += bufferSize;
        }

        // call the client's allocate function. Which client? The Client
        // object we created back in createTrack. I won't dwell on it;
        // in any case, a block of shared memory is created here:
        mCblkMemory = client->heap()->allocate(size);

        // We now have shared memory, but not yet the audio_track_cblk_t
        // object (the one holding the synchronization lock) inside it:
        mCblk = static_cast<audio_track_cblk_t*>(mCblkMemory->pointer());

        // The following syntax looks strange. What does it mean?
        new (mCblk) audio_track_cblk_t();
        // Ladies and gentlemen, this is placement new in C++. What is it
        // for? The parentheses after new hold a buffer, followed by a class
        // constructor: placement new constructs an object inside that
        // buffer. With an ordinary new, an object can never be created in a
        // block of memory you specify; placement new can do exactly that.
        // Isn't that just what we want? Create a block of shared memory,
        // then construct an object on it, so the object itself lives in the
        // memory shared by the two processes. Brilliant. Who thought of that?

        // clear all buffers
        mCblk->frameCount = frameCount;
        mCblk->sampleRate = sampleRate;
        mCblk->channels = (uint8_t)channelCount;
    }

That settles a major puzzle: audio_track_cblk_t, the key data structure for cross-process data sharing, is constructed on a block of shared memory via placement new.
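To make the placement-new mechanics concrete, here is a minimal, self-contained demonstration. The `Cblk` struct and `constructInBuffer` helper are hypothetical stand-ins for audio_track_cblk_t and the TrackBase code above; the buffer here is a local array, whereas in AF it would be the mapped shared memory.

```cpp
#include <new>  // declares placement operator new

// hypothetical miniature of audio_track_cblk_t
struct Cblk {
    int frameCount;
    Cblk() : frameCount(0) {}
};

// Construct a Cblk inside a caller-supplied buffer. No allocation happens
// here: placement new only runs the constructor at the given address.
Cblk* constructInBuffer(void* buffer) {
    return new (buffer) Cblk();
}
```

An object built this way must be destroyed with an explicit destructor call (`cblk->~Cblk()`), since there was no matching allocation for delete to free.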

Going back to AF's createTrack, there are these two lines:

    trackHandle = new TrackHandle(track);
    return trackHandle; // <-- note: this object is what gets returned to the AT process

The trackHandle is constructed from the return value of thread->createTrack_l.

2.4 how many kinds of objects are there?

Anyone reading this far is bound to be driven mad by the unusually large number of class types, inner classes, and inheritance relationships. To be honest, I could spend a little effort here and paste a big UML diagram, but I'm not in the habit of explaining with pictures, because I can't remember them. Instead, let's try to explain the objects we have so far in the simplest possible words.

1 AudioFlinger

    class AudioFlinger : public BnAudioFlinger, public IBinder::DeathRecipient

The AudioFlinger class is the class that represents the entire AudioFlinger service, and all other working classes are defined in it as inner classes. You can use it as a shell.

2 Client

Client describes the C end of the C/S structure; it is AT's counterpart on the AF side. But it is not a BpXXX of the Binder mechanism, because AF never calls functions on AT.

    class Client : public RefBase {
    public:
        sp<AudioFlinger> mAudioFlinger; // the AudioFlinger (S side) this client belongs to
        sp<MemoryDealer> mMemoryDealer; // shared memory used by each C-side client is allocated through this
        pid_t mPid;                     // the process id of the C side
    };

3 TrackHandle

TrackHandle is the Binder-based Track that the AT side obtains by calling AF's createTrack.

This TrackHandle is really a cross-process wrapper around the object that does the actual work, PlaybackThread::Track.

What does that mean? PlaybackThread::Track is the thing that actually works inside AF, but to support cross-process access we wrap it with TrackHandle. A function call on TrackHandle from AudioTrack is actually carried out by TrackHandle calling into PlaybackThread::Track. You can think of it as the Proxy pattern.
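The proxy relationship can be sketched in a few lines. This is a hypothetical miniature, not the real classes: the Binder boundary is omitted, and the `active` flag and return values are invented for illustration; the one real aspect kept is that the handle only forwards.

```cpp
#include <memory>

// the real worker on the AF side (miniature stand-in)
struct Track {
    bool active = false;
    int start() { active = true; return 0; }
    void stop() { active = false; }
};

// the Binder-facing wrapper: it holds the worker and just delegates,
// the way TrackHandle holds sp<PlaybackThread::Track> mTrack
struct TrackHandle {
    explicit TrackHandle(std::shared_ptr<Track> t) : mTrack(std::move(t)) {}
    int start() { return mTrack->start(); }  // no work of its own
    void stop() { mTrack->stop(); }
    std::shared_ptr<Track> mTrack;
};
```

In the real code the delegation additionally crosses a process boundary via BnAudioTrack/BpAudioTrack, which is exactly why the extra layer exists.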

This is one of the reasons why AudioFlinger is so complex!

    class TrackHandle : public android::BnAudioTrack {
    public:
        TrackHandle(const sp<PlaybackThread::Track>& track);
        virtual ~TrackHandle();
        virtual status_t start();
        virtual void stop();
        virtual void flush();
        virtual void mute(bool);
        virtual void pause();
        virtual void setVolume(float left, float right);
        virtual sp<IMemory> getCblk() const;
        sp<PlaybackThread::Track> mTrack;
    };

4 thread class

There are several different types of threads in AF, with corresponding thread types:

- RecordThread:

    class RecordThread : public ThreadBase, public AudioBufferProvider

The thread used for recording.

- PlaybackThread:

    class PlaybackThread : public ThreadBase

The thread used for playback.

- MixerThread:

    class MixerThread : public PlaybackThread

The thread used for mixing; note that it derives from PlaybackThread.

- DirectOutputThread:

    class DirectOutputThread : public PlaybackThread

The direct-output thread. The DIRECT_OUTPUT checks we keep seeing in the code turn out to relate to this thread.

- DuplicatingThread:

    class DuplicatingThread : public MixerThread

A duplicating thread? And derived from the mixer thread? What it's for, I don't know yet.

All these threads share a common parent class, ThreadBase, a Thread-derived class that AF defines specially for the audio system. FT, it really is troublesome.

We won't discuss ThreadBase itself, but it does encapsulate some useful functions.

Let's look at PlaybackThread, which defines inner classes of its own:

5 PlaybackThread's inner class Track

We know the Track used to construct TrackHandle comes from PlaybackThread's createTrack_l.

    class Track : public TrackBase

Dizzy, here comes another TrackBase.

TrackBase is an inner class defined by ThreadBase

    class TrackBase : public AudioBufferProvider, public RefBase

The base class AudioBufferProvider is an encapsulation of a buffer; AF later uses it to read the shared buffer and write the data to the hardware HAL.

Personal feeling: all of the above could perfectly well be split into separate files, with some comments added.

If I were the boss, reading code written like this would make me very uncomfortable. What's the point? What's the benefit?

2.5 AF process continues

Well, finally, TrackHandle is returned in the createTrack in AF. What is the state of the system at this time?

- The several threads in AF, as we said earlier, were started at some point after AF itself started. Let's assume those threads start before AT invokes the AF service.

This can be seen in the code:

    void AudioFlinger::PlaybackThread::onFirstRef() {
        const size_t SIZE = 256;
        char buffer[SIZE];
        snprintf(buffer, SIZE, "Playback Thread %p", this);
        // onFirstRef is actually a method of RefBase; it is called when the
        // first sp wrapping this object is constructed.
        // run() below actually creates a thread and starts executing threadLoop
        run(buffer, ANDROID_PRIORITY_URGENT_AUDIO);
    }

Which thread's threadLoop gets executed? Remember that we looked up threads by the output handle.

Look back at the implementation of openOutput: that is where the real thread object is created.

    int AudioFlinger::openOutput(uint32_t* pDevices, uint32_t* pSamplingRate,
            uint32_t* pFormat, uint32_t* pChannels, uint32_t* pLatencyMs,
            uint32_t flags)
    {
        ...
        if ((flags & AudioSystem::OUTPUT_FLAG_DIRECT) ||
            (format != AudioSystem::PCM_16_BIT) ||
            (channels != AudioSystem::CHANNEL_OUT_STEREO)) {
            // if flags requests direct output, or the format is not 16-bit,
            // or the channel count is not 2 (stereo), create a DirectOutputThread
            thread = new DirectOutputThread(this, output, ++mNextThreadId);
        } else {
            // unfortunately for us, we create the most complex one: MixerThread
            thread = new MixerThread(this, output, ++mNextThreadId);
        }
        ...
    }

1. MixerThread is a very important worker thread. Let's look at its constructor:

    AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger,
            AudioStreamOut* output, int id)
        : PlaybackThread(audioFlinger, output, id), mAudioMixer(0)
    {
        mType = PlaybackThread::MIXER;
        // the mixer object; the two parameters passed in come from the base
        // class ThreadBase. This object is very complex: the final mixed
        // data is all produced by it.
        mAudioMixer = new AudioMixer(mFrameCount, mSampleRate);
    }

2. AT calls start

At this point, after AT gets the IAudioTrack object, it calls the start function.

    status_t AudioFlinger::TrackHandle::start() {
        // sure enough, TrackHandle does no work itself and hands off to
        // mTrack: the Track object obtained from PlaybackThread's createTrack_l
        return mTrack->start();
    }

    status_t AudioFlinger::PlaybackThread::Track::start() {
        status_t status = NO_ERROR;
        // this thread is the thread object whose createTrack_l was called;
        // here that is the MixerThread
        sp<ThreadBase> thread = mThread.promote();
        if (thread != 0) {
            Mutex::Autolock _l(thread->mLock);
            int state = mState;
            if (mState == PAUSED) {
                mState = TrackBase::RESUMING;
            } else {
                mState = TrackBase::ACTIVE;
            }
            // add ourselves via addTrack_l. Strange: back in createTrack_l,
            // wasn't there already a container holding the created tracks?
            // Why a similar operation here?
            PlaybackThread* playbackThread = (PlaybackThread*)thread.get();
            playbackThread->addTrack_l(this);
        }
        return status;
    }

Look at this addTrack_l function.

    status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track) {
        status_t status = ALREADY_EXISTS;
        // set retry count for buffer fill
        track->mRetryCount = kMaxTrackStartupRetries;
        if (mActiveTracks.indexOf(track) < 0) {
            // ah, so it is added to the array of active Tracks
            mActiveTracks.add(track);
            status = NO_ERROR;
        }
        // whenever you see a broadcast like this, you must think: somewhere
        // not far away there is a thread waiting on this condition variable
        mWaitWorkCV.broadcast();
        return status;
    }

Let's think about it. start adds a track to the PlaybackThread's active-track queue and then fires a signal event. Since that event is an internal member of PlaybackThread, and PlaybackThread itself created a thread, could it be that thread that is waiting for the event? Now that there is an active track, that thread should be able to get to work.

That thread is the MixerThread. Let's look at its thread function, threadLoop.

    bool AudioFlinger::MixerThread::threadLoop() {
        int16_t* curBuf = mMixBuffer;
        Vector< sp<Track> > tracksToRemove;
        while (!exitPending()) {
            processConfigEvents();
            // the mixer enters its loop
            mixerStatus = MIXER_IDLE;
            {   // scope for mLock
                Mutex::Autolock _l(mLock);
                // fetch the latest array of active Tracks each time around
                const SortedVector< wp<Track> >& activeTracks = mActiveTracks;
                // the following is a preparatory step; the returned status
                // says whether there is any data to fetch
                mixerStatus = prepareTracks_l(activeTracks, &tracksToRemove);
            }
            // LIKELY is a GCC facility that lets the compiler optimize the
            // generated code; just read it as TRUE
            if (LIKELY(mixerStatus == MIXER_TRACKS_READY)) {
                // mix buffers...
                // call the mixer and pass the buffer in; the mixed data ends
                // up in curBuf. curBuf is mMixBuffer, PlaybackThread's
                // internal buffer, created somewhere earlier and large enough
                mAudioMixer->process(curBuf);
                sleepTime = 0;
                standbyTime = systemTime() + kStandbyTimeInNsecs;
            }
            // there is data to write to the hardware, so absolutely no sleeping
            if (sleepTime == 0) {
                // write the mixed data to the output. This mOutput is the
                // AudioStreamOut created by the audio HAL object; we'll say
                // more when we analyze it later
                int bytesWritten = (int)mOutput->write(curBuf, mixBufferSize);
                mStandby = false;
            } else {
                usleep(sleepTime); // no data, so take a break
            }
        }
        ...
    }

3. MixerThread Core

Does it feel different now that you're here? AF's work is precise: every part fits together perfectly. But for those of us reading the code, the benefit of doing it this way is genuinely hard to see; it goes a little too far.

The two most important functions in MixerThread's loop are prepareTracks_l and mAudioMixer->process. Let's look at them one by one.

    uint32_t AudioFlinger::MixerThread::prepareTracks_l(
            const SortedVector< wp<Track> >& activeTracks,
            Vector< sp<Track> >* tracksToRemove)
    {
        uint32_t mixerStatus = MIXER_IDLE;
        // the number of active tracks. If ours is the only AT created, count == 1
        size_t count = activeTracks.size();
        float masterVolume = mMasterVolume;
        bool masterMute = mMasterMute;
        for (size_t i = 0; i < count; i++) {
            sp<Track> t = activeTracks[i].promote();
            Track* const track = t.get();
            audio_track_cblk_t* cblk = track->cblk();
            // tell the mixer which track is currently active
            mAudioMixer->setActiveTrack(track->name());
            if (cblk->framesReady() && (track->isReady() || track->isStopped()) &&
                !track->isPaused() && !track->isTerminated()) {
                // compute volume for this track.
                // AT has already written data, so we are bound to come in here
                int16_t left, right;
                if (track->isMuted() || masterMute || track->isPausing() ||
                    mStreamTypes[track->type()].mute) {
                    left = right = 0;
                    if (track->isPausing()) {
                        track->setPaused();
                    }
                } else {
                    // the volume AT set is assumed non-zero: we want to hear
                    // the sound! So we follow the else branch:
                    // read original volumes with volume control
                    float typeVolume = mStreamTypes[track->type()].volume;
                    float v = masterVolume * typeVolume;
                    float v_clamped = v * cblk->volume[0];
                    if (v_clamped > MAX_GAIN) v_clamped = MAX_GAIN;
                    left = int16_t(v_clamped);
                    v_clamped = v * cblk->volume[1];
                    if (v_clamped > MAX_GAIN) v_clamped = MAX_GAIN;
                    right = int16_t(v_clamped);
                    // the volumes are now computed
                }
                // note: the data source set for the mixer is a track.
                // Remember what we said earlier? Track derives from
                // AudioBufferProvider
                mAudioMixer->setBufferProvider(track);
                mAudioMixer->enable(AudioMixer::MIXING);
                int param = AudioMixer::VOLUME;
                // set the left and right volume for this track
                mAudioMixer->setParameter(param, AudioMixer::VOLUME0, left);
                mAudioMixer->setParameter(param, AudioMixer::VOLUME1, right);
                mAudioMixer->setParameter(AudioMixer::TRACK,
                        AudioMixer::FORMAT, track->format());
                mAudioMixer->setParameter(AudioMixer::TRACK,
                        AudioMixer::CHANNEL_COUNT, track->channelCount());
                mAudioMixer->setParameter(AudioMixer::RESAMPLE,
                        AudioMixer::SAMPLE_RATE, int(cblk->sampleRate));
            } else {
                if (track->isStopped()) {
                    track->reset();
                }
                // if the track has been terminated, stopped or paused, add it
                // to the queue of tracks to remove, and stop mixing it in
                // AudioMixer
                if (track->isTerminated() || track->isStopped() ||
                    track->isPaused()) {
                    tracksToRemove->add(track);
                    mAudioMixer->disable(AudioMixer::MIXING);
                } else {
                    mAudioMixer->disable(AudioMixer::MIXING);
                }
            }
        }
        // remove all the tracks that need to be removed...
        count = tracksToRemove->size();
        ...
        return mixerStatus;
    }

Do you see now what prepareTracks_l does? It configures the mixer from the currently active track queue. As you can imagine, each track must have a counterpart inside the mixer; we'll elaborate when we analyze AudioMixer. Once the mixer is prepared, its process function is called:

    void AudioMixer::process(void* output) {
        mState.hook(&mState, output); // hook? Is that a hook function?
    }

Dizzy, is it such a simple function?

Ctrl + left-click shows that hook is a function pointer. Where is it assigned? Which concrete function implements it?

I have no choice but to analyze the AudioMixer class.

4. AudioMixer

AudioMixer implementation in framework/base/libs/audioflinger/AudioMixer.cpp

AudioMixer::AudioMixer(size_t frameCount, uint32_t sampleRate)
    : mActiveTrack(0), mTrackNames(0), mSampleRate(sampleRate)
{
    mState.enabledTracks = 0;
    mState.needsChanged = 0;
    mState.frameCount = frameCount;
    mState.outputTemp = 0;
    mState.resampleTemp = 0;
    mState.hook = process__nop; // process__nop is a static function of this class
    track_t* t = mState.tracks;
    // Mixing of up to 32 tracks is supported. Impressive!
    for (int i = 0; i < 32; i++) {
        t->needs = 0;
        t->volume[0] = UNITY_GAIN;
        t->volume[1] = UNITY_GAIN;
        t->volumeInc[0] = 0;
        t->volumeInc[1] = 0;
        t->channelCount = 2;
        t->enabled = 0;
        t->format = 16;
        t->buffer.raw = 0;
        t->bufferProvider = 0;
        t->hook = 0;
        t->resampler = 0;
        t->sampleRate = mSampleRate;
        t->in = 0;
        t++;
    }
}

mState is a data structure defined in AudioMixer.h.

(Note: Source Insight cannot parse this mState because of the __attribute__ line -- see the comment in the code below.)

struct state_t {
    uint32_t        enabledTracks;
    uint32_t        needsChanged;
    size_t          frameCount;
    mix_t           hook;
    int32_t*        outputTemp;
    int32_t*        resampleTemp;
    int32_t         reserved[2];
    track_t         tracks[32]; // __attribute__((aligned(32)));
    // comment the attribute out, otherwise Source Insight cannot parse state_t
};

int mActiveTrack;

uint32_t mTrackNames; // names? Sounds like a string, but it is actually an int.

const uint32_t mSampleRate;

state_t mState;

All right, nothing special there. The candidate implementations for hook are:

process__validate

process__nop

process__genericNoResampling

process__genericResampling

process__OneTrack16BitsStereoNoResampling

process__TwoTracks16BitsStereoNoResampling

When AudioMixer is constructed, hook is process__nop, and several places later redirect the function pointer.

This part involves digital audio processing, which I cannot fully explain here. Let's look at the most common case:

process__OneTrack16BitsStereoNoResampling

A single track, 16-bit stereo, no resampling needed -- this covers most cases.

void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state, void* output)
{
    const int i = 31 - __builtin_clz(state->enabledTracks);
    const track_t& t = state->tracks[i];
    AudioBufferProvider::Buffer& b(t.buffer);
    int32_t* out = static_cast<int32_t*>(output);
    size_t numFrames = state->frameCount;
    const int16_t vl = t.volume[0];
    const int16_t vr = t.volume[1];
    const uint32_t vrl = t.volumeRL;
    while (numFrames) {
        b.frameCount = numFrames;
        // Get a buffer of data from the track (the AudioBufferProvider).
        t.bufferProvider->getNextBuffer(&b);
        int16_t const* in = b.i16;
        size_t outFrames = b.frameCount;
        if (UNLIKELY(...)) {
            // ... we don't go down this path ...
        } else {
            do {
                // Apply the volume -- this is the digital-audio part.
                uint32_t rl = *reinterpret_cast<uint32_t const*>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                *out++ = (r << 16) | (l & 0xFFFF);
            } while (--outFrames);
        }
        numFrames -= b.frameCount;
        // Give the buffer back to the track.
        t.bufferProvider->releaseBuffer(&b);
    }
}

Where does getNextBuffer get its data? It is implemented by Track:

status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(AudioBufferProvider::Buffer* buffer)
{
    audio_track_cblk_t* cblk = this->cblk();
    uint32_t framesReady;
    uint32_t framesReq = buffer->frameCount;
    // See whether the data is ready.
    framesReady = cblk->framesReady();
    if (LIKELY(framesReady)) {
        uint32_t s = cblk->server;
        uint32_t bufferEnd = cblk->serverBase + cblk->frameCount;
        bufferEnd = (cblk->loopEnd < bufferEnd) ? cblk->loopEnd : bufferEnd;
        if (framesReq > framesReady) {
            framesReq = framesReady;
        }
        if (s + framesReq > bufferEnd) {
            framesReq = bufferEnd - s;
        }
        // Get the real data address.
        buffer->raw = getBuffer(s, framesReq);
        if (buffer->raw == 0) goto getNextBuffer_exit;
        buffer->frameCount = framesReq;
        return NO_ERROR;
    }
getNextBuffer_exit:
    buffer->raw = 0;
    buffer->frameCount = 0;
    return NOT_ENOUGH_DATA;
}

Now look at where the buffer is released. releaseBuffer is implemented directly in ThreadBase:

void AudioFlinger::ThreadBase::TrackBase::releaseBuffer(AudioBufferProvider::Buffer* buffer)
{
    buffer->raw = 0;
    mFrameCount = buffer->frameCount;
    step();
    buffer->frameCount = 0;
}

Look at step. mFrameCount means "I have used up this many frames."

bool AudioFlinger::ThreadBase::TrackBase::step()
{
    bool result;
    audio_track_cblk_t* cblk = this->cblk();
    // Hmm, call cblk's stepServer to advance the server (reader) position.
    result = cblk->stepServer(mFrameCount);
    return result;
}

At this point everyone should understand: this is how the data written by AudioTrack's write finally gets used!

Well, looking at just process__OneTrack16BitsStereoNoResampling is not much fun. Let's also look at

process__TwoTracks16BitsStereoNoResampling.

void AudioMixer::process__TwoTracks16BitsStereoNoResampling(state_t* state, void* output)
{
    int i;
    uint32_t en = state->enabledTracks;

    i = 31 - __builtin_clz(en);
    const track_t& t0 = state->tracks[i];
    AudioBufferProvider::Buffer& b0(t0.buffer);

    en &= ~(1 << i);
    i = 31 - __builtin_clz(en);
    const track_t& t1 = state->tracks[i];
    AudioBufferProvider::Buffer& b1(t1.buffer);

    int16_t const* in0;
    const int16_t vl0 = t0.volume[0];
    const int16_t vr0 = t0.volume[1];
    size_t frameCount0 = 0;

    int16_t const* in1;
    const int16_t vl1 = t1.volume[0];
    const int16_t vr1 = t1.volume[1];
    size_t frameCount1 = 0;

    int32_t* out = static_cast<int32_t*>(output);
    size_t numFrames = state->frameCount;
    int16_t const* buff = NULL;

    while (numFrames) {
        if (frameCount0 == 0) {
            b0.frameCount = numFrames;
            t0.bufferProvider->getNextBuffer(&b0);
            if (b0.i16 == NULL) {
                if (buff == NULL) {
                    buff = new int16_t[MAX_NUM_CHANNELS * state->frameCount];
                }
                in0 = buff;
                b0.frameCount = numFrames;
            } else {
                in0 = b0.i16;
            }
            frameCount0 = b0.frameCount;
        }
        if (frameCount1 == 0) {
            b1.frameCount = numFrames;
            t1.bufferProvider->getNextBuffer(&b1);
            if (b1.i16 == NULL) {
                if (buff == NULL) {
                    buff = new int16_t[MAX_NUM_CHANNELS * state->frameCount];
                }
                in1 = buff;
                b1.frameCount = numFrames;
            } else {
                in1 = b1.i16;
            }
            frameCount1 = b1.frameCount;
        }

        size_t outFrames = frameCount0 < frameCount1 ? frameCount0 : frameCount1;
        numFrames -= outFrames;
        frameCount0 -= outFrames;
        frameCount1 -= outFrames;

        do {
            int32_t l0 = *in0++;
            int32_t r0 = *in0++;
            l0 = mul(l0, vl0);
            r0 = mul(r0, vr0);
            int32_t l = *in1++;
            int32_t r = *in1++;
            l = mulAdd(l, vl1, l0) >> 12;
            r = mulAdd(r, vr1, r0) >> 12;
            // clamping...
            l = clamp16(l);
            r = clamp16(r);
            *out++ = (r << 16) | (l & 0xFFFF);
        } while (--outFrames);

        if (frameCount0 == 0) {
            t0.bufferProvider->releaseBuffer(&b0);
        }
        if (frameCount1 == 0) {
            t1.bufferProvider->releaseBuffer(&b1);
        }
    }

    if (buff != NULL) {
        delete [] buff;
    }
}

You don't understand, do you? Haha, just know that there is such a thing, specializing in digital audio needs to be carefully studied!

A third discussion of the shared audio_track_cblk_t

Why talk about this again? Because I searched online, and some people say audio_track_cblk_t is a circular buffer. What is a circular buffer? Look it up yourself!

This touches my previous work experience. A former BOSS worked very hard to build an impressive circular buffer, which exhausted me. Now audio_track_cblk_t is a circular buffer? I would like to see how it works.

By the way, note that audio_track_cblk_t is not used quite the way I described earlier as Lock, read/write, Unlock. Why not?

First, because in the AF code we never see MixerThread wait on the buffer; it only usleeps when there is no data.

Second, with multiple tracks there are multiple audio_track_cblk_t objects. If we adopted the wait/signal approach, which one would we wait on? The pthread library lacks a WaitForMultipleObjects mechanism. This is an important problem when building a cross-platform synchronization library.

1. How the writer uses it

Let's focus on the audio_track_cblk_t class and see how the writer uses it. The writer is the AudioTrack side, called user in this class.

- framesAvailable: see how much free space there is

- buffer: get the starting address of the writable space

- stepUser: update the user's (writer's) position

2. How the reader uses it

The reader is the AF side, called server in this class.

- framesReady: get how many frames are readable

- stepServer: update the reader's position

Look at the definition of this class:

struct audio_track_cblk_t {
    Mutex       lock;       // synchronization lock
    Condition   cv;         // condition variable
    volatile uint32_t user;     // writer position
    volatile uint32_t server;   // reader position
    uint32_t    userBase;   // writer base position
    uint32_t    serverBase; // reader base position
    void*       buffers;
    uint32_t    frameCount;
    // Cache line boundary
    uint32_t    loopStart;  // loop start
    uint32_t    loopEnd;    // loop end
    int         loopCount;
    ...
    uint8_t     out;        // 1 for a Track, meaning this is for output
    ...
};

Note the volatile here, on members of a cross-process object. It seems volatile can also work across processes.

Sigh, another digression. volatile simply tells the compiler not to cache that memory location: every access must go to actual memory, and reading memory may lock the bus, preventing other CPU cores from modifying it at the same time. Since the memory is shared across processes -- both processes see the very same block -- and the bus is locked during access, volatile keeps the two sides' views consistent here.

The values of loopStart and loopEnd mark the beginning and end of the loop region, and the loopCount below them gives the number of times to play the loop.

Then analyze it.

Let's first look at the functions of the writer.

4. Writer analysis

First use framesAvailable to see how much space there is. Assume this is the first time in; the reader is still asleep.

uint32_t audio_track_cblk_t::framesAvailable()
{
    Mutex::Autolock _l(lock);
    return framesAvailable_l();
}

uint32_t audio_track_cblk_t::framesAvailable_l()
{
    uint32_t u = this->user;   // current writer position, 0 for now
    uint32_t s = this->server; // current reader position, also 0
    if (out) { // out is 1
        uint32_t limit = (s < loopStart) ? s : loopStart;
        // We did not set loop playback, so loopStart keeps its initial
        // value INT_MAX, and limit = s = 0.
        return limit + frameCount - u;
        // Returns 0 + frameCount - 0: the whole buffer is free.
        // Assume frameCount = 1024 frames.
    }
}

Then call buffer to get the starting address. buffer just computes an address:

void* audio_track_cblk_t::buffer(uint32_t offset) const
{
    return (int8_t*)this->buffers + (offset - userBase) * this->frameSize;
}

That's it. After writing the data, we update the writer position by calling stepUser.

uint32_t audio_track_cblk_t::stepUser(uint32_t frameCount)
{
    // frameCount says how much I have written. Assume 512 frames this time.
    uint32_t u = this->user; // the user position, not updated yet
    u += frameCount;         // u updated, u = 512
    // Ensure that user is never ahead of server for AudioRecord
    if (out) {
        // ... nothing relevant here; it computes a wait time ...
    }
    // userBase is still its initial value 0. Unfortunately we only wrote
    // 512 of the 1024 frames, so userBase cannot advance yet.
    if (u >= userBase + this->frameCount) {
        userBase += this->frameCount;
        // This line is crucial: once userBase advances, the buffer()
        // function shows that this "ring" buffer has been straightened
        // out into a flat, continuous one.
    }
    this->user = u; // the user position is now 512, but userBase is still 0
    return u;
}

Well, suppose the writer now goes to sleep and the reader gets up.

5. Reader analysis

uint32_t audio_track_cblk_t::framesReady()
{
    uint32_t u = this->user;   // u is 512
    uint32_t s = this->server; // nothing read yet, s is 0
    if (out) {
        if (u < loopEnd) {
            return u - s;
            // loopEnd is also INT_MAX, so this returns 512:
            // 512 frames are readable.
        } else {
            Mutex::Autolock _l(lock);
            if (loopCount >= 0) {
                return (loopEnd - loopStart) * loopCount + u - s;
            } else {
                return UINT_MAX;
            }
        }
    } else {
        return s - u;
    }
}

After consuming the data, call stepServer:

bool audio_track_cblk_t::stepServer(uint32_t frameCount)
{
    status_t err;
    err = lock.tryLock();
    ...
    uint32_t s = this->server;
    s += frameCount; // 512 frames have been read, so s = 512
    if (out) {
        ...
    }
    // Loop playback is not set, so this branch is not taken.
    if (s >= loopEnd) {
        s = loopStart;
        if (--loopCount == 0) {
            loopEnd = UINT_MAX;
            loopStart = UINT_MAX;
        }
    }
    // Same trick as in stepUser: straighten out the ring buffer.
    if (s >= serverBase + this->frameCount) {
        serverBase += this->frameCount;
    }
    this->server = s; // server is now 512
    cv.signal();      // the reader is done; wake the writer.
    lock.unlock();
    return true;
}

6. Is it really a circular buffer?

A circular buffer handles a scenario like this. Suppose the buffer holds 1024 frames.

Suppose:

- the writer first writes up to frame 1024;

- the reader then reads up to frame 512.

Then the writer can write another 512 frames, starting over from the beginning.

So we have to look back and see whether framesAvailable counts these 512 frames.

uint32_t audio_track_cblk_t::framesAvailable_l()
{
    uint32_t u = this->user;   // 1024
    uint32_t s = this->server; // 512
    if (out) {
        uint32_t limit = (s < loopStart) ? s : loopStart;
        return limit + frameCount - u;
        // Returns 512 + 1024 - 1024 = 512: the freed frames are counted!
    }
}

Now look at this line of stepUser again:

if (u >= userBase + this->frameCount) {
    // u is 1024, userBase is 0, frameCount is 1024
    userBase += this->frameCount; // good, userBase becomes 1024 too
}

And look at buffer:

return (int8_t*)this->buffers + (offset - userBase) * this->frameSize;

With userBase advanced to 1024, an offset of 1024 maps right back to the start of the flat buffer. So yes, it really does behave as a circular buffer.

After reading this, the article "what is the use of AudioFlinger in Android" has been introduced. If you want to master the knowledge of this article, you still need to practice and use it yourself to understand it. If you want to know more about related articles, welcome to follow the industry information channel.
