Today let's talk about the changes to the Android Audio system in Jelly Bean (4.1), which many people may not know well. I've summarized the main points below; I hope you get something out of this article.
Let's start with the Java layer AudioTrack class.
1 AudioTrack Java class changes
In terms of channel count, there used to be only mono (MONO) and stereo (STEREO); this has now expanded to as many as 8 channels (7.1 HiFi). The parameter name is CHANNEL_OUT_7POINT1_SURROUND. When I saw this parameter my jaw hit the floor — I can't yet figure out what it's for; if you know, please share. Of course, the final output is still two channels: when more than two channels are in play, a downmixer is used to reduce them (search for "downmix" if you're curious).
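Since the final output is stereo anyway, here is a minimal sketch of the downmix idea — not AF's actual Downmixer code; the -3 dB (0.707) coefficients are just the conventional ones:

    #include <stddef.h>

    // Hypothetical sketch: fold 5.1 content down to stereo. Center and
    // surround channels are mixed in at -3 dB; the LFE channel is dropped.
    struct Frame51 { float fl, fr, c, lfe, sl, sr; };

    void downmixToStereo(const Frame51* in, float* outLR, size_t frames) {
        for (size_t i = 0; i < frames; ++i) {
            outLR[2 * i]     = in[i].fl + 0.707f * in[i].c + 0.707f * in[i].sl;
            outLR[2 * i + 1] = in[i].fr + 0.707f * in[i].c + 0.707f * in[i].sr;
        }
    }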
There are other changes too, but nothing major. I've picked out the eye-catching ones here; don't worry, I won't leave you staring at nothing but mosaics.
2 AudioTrack JNI layer changes
This covers both the JNI glue code and the native AudioTrack itself. The JNI layer does not change much.
The core Audio native code has been moved to frameworks/av. Yes, you read that right: it really is av. This is a relatively big change in JB Audio — the Audio native core code has all been moved to the frameworks/av directory.
AudioTrack adds a variable to control the scheduling priority of the process that uses it (correcting what I said earlier: it actually sets the nice value). When playback starts, the process scheduling priority is set to ANDROID_PRIORITY_AUDIO. Let me say a few words here. On a single-core CPU, setting this is foolish: ANDROID_PRIORITY_AUDIO is -16, a very high priority, and with a single core pinned at such a monster level I don't know how other apps are supposed to run (if you don't know what I'm talking about, read http://blog.csdn.net/innost/article/details/6940136 first). But dual-core and quad-core machines are now quite common, so there is room to play with scheduling here. The real test for us programmers: multi-core parallel programming and the principles of the Linux OS now need to be mastered — Audio can no longer be pushed around so easily. Also, low-end phones: please don't port 4.1. This really isn't something low-end hardware can play with.
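For reference, here is a minimal sketch of what raising the calling thread to audio priority looks like on Linux. The -16 value matches ANDROID_PRIORITY_AUDIO from Android's headers, but this is not AudioTrack's actual code, and lowering nice below 0 needs the right privileges (which mediaserver has):

    #include <sys/resource.h>  // setpriority, PRIO_PROCESS

    // ANDROID_PRIORITY_AUDIO is -16 in Android's system/core headers.
    static const int kAudioPriority = -16;

    void raiseCallingThreadToAudioPriority() {
        // On Linux, PRIO_PROCESS with who == 0 applies to the calling thread.
        setpriority(PRIO_PROCESS, 0, kAudioPriority);
    }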
AudioTrack has been promoted to a parent class: JB defines a puzzling TimedAudioTrack subclass for it. This class is used in the aah_rtp codec directory (I don't yet know what aah stands for). Judging from the comments, it is an audio output interface carrying timestamps (and with timestamps, synchronization becomes possible). Understanding it in detail requires analyzing the concrete usage scenarios (mainly RTP). Codec folks, hang on tight!
Another fairly involved change: Audio defines several output flags (see the audio_output_flags_t enumeration in audio.h). According to the comments, this value serves two purposes: users of AudioTrack can specify what kind of output they want, and device manufacturers can declare the outputs they support (it appears parameters are read and configured at device initialization). From the enum definition alone, though, I can't see what it has to do with hardware. The values are:
typedef enum {
    AUDIO_OUTPUT_FLAG_NONE = 0x0,       // no attributes
    AUDIO_OUTPUT_FLAG_DIRECT = 0x1,     // this output directly connects a track
                                        // to one output stream: no software mixer
    AUDIO_OUTPUT_FLAG_PRIMARY = 0x2,    // this output is the primary output of
                                        // the device. It is unique and must be
                                        // present. It is opened by default and
                                        // receives routing, audio mode and volume
                                        // controls related to voice calls.
    AUDIO_OUTPUT_FLAG_FAST = 0x4,       // output supports "fast tracks",
                                        // defined elsewhere
                                        // <== what on earth is a fast track?
    AUDIO_OUTPUT_FLAG_DEEP_BUFFER = 0x8 // use deep audio buffers
                                        // <== and what is a deep buffer? Is this
                                        //     mosaic a bit too big to see through?
} audio_output_flags_t;

Note: currently the Java-layer AudioTrack uses only the first flag.
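Since these are bit values, they can be OR-ed together. A hypothetical combination using the enum above (whether a device accepts it is up to the HAL):

    audio_output_flags_t flags = (audio_output_flags_t)
            (AUDIO_OUTPUT_FLAG_PRIMARY | AUDIO_OUTPUT_FLAG_FAST);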
Other changes to AudioTrack are minor. AudioTrack.cpp is only about 1,600 lines in total — easy!
OK, there are several mosaics above — unknowns I couldn't see through yet. Let's pin our hopes for removing them on the AudioFlinger analysis that comes next.
3 AudioFlinger changes
We will introduce the changes according to the main flow of AF's work:
AF creation, including its onFirstRef function
The openOutput function and the creation of the MixerThread object
AudioTrack calls the createTrack function
AudioTrack calls the start function
AF mixing and output
3.1 AF creation and onFirstRef
Not much has changed here. Three points:
There is now finer-grained control over the volume of the Primary device. Some devices can set a master volume and some cannot, so a master_volume_support enumeration (AudioFlinger.h) is defined to describe the Primary device's volume-control capability.
The standby time of the playback thread (used to save power) used to be hard-coded; it can now be set through the ro.audio.flinger_standbytime_ms property, defaulting to 3 seconds if the property is absent. AF also adds other control variables, such as a gScreenState variable indicating whether the screen is on or off, which can be changed via AudioSystem::setParameters.
An mBtNrecIsOff variable related to Bluetooth SCO is defined. It controls whether the AEC and NS effects are disabled when Bluetooth SCO is used for recording (NREC is a Bluetooth HFP term: Noise Reduction and Echo Cancellation). See AudioParameter.cpp.
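As a sketch of how such a property is read — property_get is the standard libcutils call, though AF's exact code may differ:

    #include <stdlib.h>              // atoi
    #include <cutils/properties.h>   // property_get, PROPERTY_VALUE_MAX

    // Sketch: fetch the standby timeout in ms, defaulting to 3 seconds.
    static int getStandbyTimeMs() {
        char value[PROPERTY_VALUE_MAX];
        property_get("ro.audio.flinger_standbytime_ms", value, "3000");
        return atoi(value);
    }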
3.2 openOutput function
The openOutput function is key: in it you will meet old friends such as MixerThread and AudioStreamOutput. The whole flow includes loading the audio HAL's shared library. This part has existed since 4.0 and the flow itself hasn't changed much — but the old friends themselves have changed a great deal. Let's look at the MixerThread family first.
Figure 1: The PlaybackThread family
A few notes on Figure 1:
ThreadBase derives from Thread, so it runs in its own thread (to put it simply, the thread and the object are separate things; if you don't understand multithreaded programming, do study it carefully). It defines an enum type_t representing the subclass type, including MIXER, DIRECT, RECORD, DUPLICATING, and so on. This part should be easy.
ThreadBase's inner class TrackBase derives from ExtendedAudioBufferProvider, which appears to be new. Just think of TrackBase as a buffer container.
ThreadBase's inner class PMDeathRecipient listens for the death of PowerManagerService. This design is a bit funny: PMS runs in system_server, and PMS only dies if system_server dies; and if system_server dies, the rules in init.rc cause mediaserver to be killed, taking AudioFlinger with it. Since everyone dies together, and quickly, what is the point of this PMDeathRecipient? (A sketch of the binder death-notification pattern it uses follows this list.)
Now let's look at an important subclass of ThreadBase, PlaybackThread, which has had quite a makeover.
It defines an enum mixer_state reflecting the current mixing state, with values MIXER_IDLE, MIXER_READY and MIXER_ENABLED.
Several virtual functions are defined for subclasses to implement, including threadLoop_mix, prepareTracks_l, and so on. The abstraction is decent, but the changes are big enough that you need to stay alert.
The Track class now also derives from VolumeProvider, which is used to control volume. As introduced earlier, volume management in JB is more fine-grained than before.
TimedTrack is newly defined. Its role is related to the aah_rtp code mentioned earlier. Once you finish this article, you can dig into it and launch your war of annihilation!
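As promised above, here is a minimal sketch of the binder death-notification pattern that PMDeathRecipient follows — the generic pattern, not AF's actual code:

    #include <binder/IBinder.h>
    #include <utils/RefBase.h>

    using android::IBinder;
    using android::wp;

    // Register a recipient on a remote binder; binderDied() fires if the
    // remote process hosting that binder dies.
    class MyDeathRecipient : public IBinder::DeathRecipient {
        virtual void binderDied(const wp<IBinder>& who) {
            // Remote service died: drop the cached proxy so the next user
            // re-acquires a fresh one from the service manager.
        }
    };
    // Registration (sketch): service->linkToDeath(new MyDeathRecipient());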
Next, look at figure 2.
Figure 2: MixerThread and its relatives
A few notes on Figure 2:
MixerThread derives from PlaybackThread; that relationship has never changed, and I believe it never will.
MT's biggest changes are in several important member variables. You surely know AudioMixer, which does the mixing.
There is a new Soaker object (enabled by a compile-time macro), which is a thread. The most apt Webster definition of "soak" here (those of us who lived with the GRE know Webster) is "to cause to pay an exorbitant amount". Still unclear? Look at the code: Soaker is a dedicated thread that hammers the CPU — it computes nonstop to drive CPU usage up, presumably to test the efficiency of the new AF framework on multi-core CPUs. So again: low-end smartphones, don't play with JB.
Another piece of hard evidence that low-end phones can't handle JB: MT now contains a FastMixer, which is also a thread. In JB, on multi-core machines, part of the mixing can be done on the FastMixer thread — naturally faster and more efficient.
FastMixer's workflow is complex and involves multi-thread synchronization, so a FastMixerStateQueue is defined, which is simply a typedef of StateQueue<FastMixerState>. Think of a StateQueue, roughly, as a small array: it holds four FastMixerState members in its mStates variable. (A sketch of the state-queue idea follows this list.)
FastMixerState works like a state machine, with an enum Command controlling the state. FastMixerState contains an eight-element FastTrack array. FastTrack is the worker class through which FastMixer does its job.
Each FastTrack has an mBufferProvider member of type SourceAudioBufferProvider.
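As referenced above, here is a deliberately simplified sketch of the single-writer / single-reader state-passing idea. The real android StateQueue also has an acknowledge protocol so the writer never recycles a slot the reader is still using; this toy version ignores that:

    #include <atomic>

    // Simplified sketch: one writer thread (MixerThread) publishes state
    // snapshots; one reader thread (FastMixer) polls the latest one.
    // NOT the real android::StateQueue.
    template <typename T, unsigned N = 4>
    class SimpleStateQueue {
    public:
        T* begin() { return &mSlots[mIndex]; }  // writer: slot to fill in
        void push() {                           // writer: publish the slot
            mLatest.store(&mSlots[mIndex], std::memory_order_release);
            mIndex = (mIndex + 1) % N;          // next slot (assumes reader kept up)
        }
        const T* poll() const {                 // reader: newest published state
            return mLatest.load(std::memory_order_acquire);
        }
    private:
        T mSlots[N];
        unsigned mIndex = 0;
        std::atomic<T*> mLatest{nullptr};
    };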
The above is already quite complicated, so let's take a look at other things encountered in the creation of MixerThread objects:
3.3 MixerThread creation
From Figures 1 and 2 you should now have a picture of AF's key members. Unfortunately, MixerThread also contains an mOutputSink member — did you spot it? It is closely tied to NBAIO (Non-Blocking Audio I/O), which exists to support non-blocking audio input and output operations. Here is the comment for this class:
NBAIO comments:
// This header file has the abstract interfaces only. Concrete implementation classes are declared
// elsewhere. Implementations _should_ be non-blocking for all methods, especially read() and
// write(), but this is not enforced. In general, implementations do not need to be multi-thread
// safe, and any exceptions are noted in the particular implementation.
So NBAIO just defines interfaces; concrete implementation classes are required. It asks that read/write be non-blocking, but whether an implementation actually blocks is up to the implementer.
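In practice, "non-blocking" means a write may accept fewer frames than offered and return at once, so callers must handle short writes. A sketch of the calling pattern, with an assumed Sink shape rather than the real NBAIO types:

    #include <sys/types.h>  // ssize_t
    #include <stddef.h>

    // Sketch: write() returns frames accepted -- possibly fewer than
    // requested -- or a negative error; the caller handles the short count.
    template <typename Sink>
    size_t writeAll(Sink& sink, const void* buf, size_t frames, size_t frameSize) {
        size_t done = 0;
        const char* p = static_cast<const char*>(buf);
        while (done < frames) {
            ssize_t n = sink.write(p + done * frameSize, frames - done);
            if (n <= 0) break;  // error, or sink full: a real caller retries later
            done += static_cast<size_t>(n);
        }
        return done;
    }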
Personally, I feel this part of the framework is not yet fully mature, but the introduction of NBAIO deserves careful attention; it is, relatively speaking, on the difficult side. Figure 3 shows some of NBAIO's contents.
Figure 3: NBAIO-related classes
Figure 3 is explained as follows:
NBAIO is built around three main classes. The first is NBAIO_Port, which represents an I/O endpoint and defines a negotiate function for parameter negotiation between the caller and the endpoint. Note that this is negotiation, not simply setting parameters on the endpoint: for hardware-backed endpoints, some parameters cannot be changed at will the way software parameters can. For example, if the hardware supports at most a 44.1 kHz sampling rate and the caller asks for 48 kHz, a negotiation and matching process is needed. This function is tricky to use, mainly because there are many rules; see its comments. (A sketch of the negotiation idea follows this list.)
NBAIO_Sink represents the output endpoint and defines write and writeVia. writeVia takes a callback via, which is called internally to fetch the data — essentially the push and pull modes of data transfer.
NBAIO_Source represents the input endpoint and defines read and readVia, analogous to NBAIO_Sink.
MonoPipe and MonoPipeReader are defined. Pipe means pipeline, but MonoPipe has nothing to do with the Linux IPC pipe; it merely borrows the pipe concept and idea. MonoPipe is a pipe supporting only a single reader (in AF, a MonoPipeReader). These two pipes represent the Audio output and input endpoints.
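And here is the promised sketch of the negotiation idea. The types and logic are simplified stand-ins, not the real NBAIO_Port API:

    #include <sys/types.h>
    #include <stdint.h>

    // Hypothetical format descriptor; the real NBAIO format type differs.
    struct Format { uint32_t sampleRate; uint32_t channels; };

    // Sketch: the caller offers formats it can handle; the endpoint accepts
    // one (returning its index) or replies with counter-offers and a
    // negative result, after which the caller renegotiates.
    ssize_t negotiate(const Format offers[], size_t numOffers,
                      Format counterOffers[], size_t& numCounterOffers) {
        const Format hw = {44100, 2};   // endpoint fixed at 44.1 kHz stereo here
        for (size_t i = 0; i < numOffers; ++i) {
            if (offers[i].sampleRate == hw.sampleRate &&
                offers[i].channels == hw.channels) {
                return (ssize_t)i;      // offer i accepted
            }
        }
        if (numCounterOffers > 0) {     // tell the caller what we can do
            counterOffers[0] = hw;
        }
        numCounterOffers = 1;
        return -1;                      // caller should retry with 44.1 kHz stereo
    }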
In MT, mOutputSink points to an AudioStreamOutSink, which derives from NBAIO_Sink and is used for normal mixer output. mPipeSink points to a MonoPipe and is intended for FastMixer. There is also a variable mNormalSink, which points to either mPipeSink or mOutputSink depending on the FastMixer situation. The control logic is:
switch (kUseFastMixer) {  // kUseFastMixer controls how FastMixer is used; 4 options:
case FastMixer_Never:     // never use FastMixer; for debugging, i.e. FastMixer off
case FastMixer_Dynamic:   // decide dynamically as needed; per the comments,
                          // not fully implemented yet
    mNormalSink = mOutputSink;
    break;
case FastMixer_Always:    // always use FastMixer; for debugging
    mNormalSink = mPipeSink;
    break;
case FastMixer_Static:    // static; this is the default, but whether mPipeSink
                          // is actually used is further controlled by initFastMixer
    mNormalSink = initFastMixer ? mPipeSink : mOutputSink;
    break;
}
As mentioned, kUseFastMixer defaults to FastMixer_Static, but whether mNormalSink points to mPipeSink is further controlled by initFastMixer. That variable is determined by comparing mFrameCount and mNormalFrameCount: initFastMixer is true only when mFrameCount is smaller than mNormalFrameCount. Dizzy yet? Both frame counts are computed by PlaybackThread's readOutputParameters. Please study that code yourselves; it's just simple arithmetic, but to really nail it down you'd better plug in actual parameter values and work everything out.
That concludes the analysis of MixerThread's creation. Do study this code carefully and get to know what each of these players does.
3.4 createTrack and start
The biggest change in createTrack is the new handling of the MediaSyncEvent synchronization mechanism. The purpose of MediaSyncEvent is simple; its Java API documentation says: startRecording(MediaSyncEvent) is used to start capture only when the playback on a particular audio session is complete. The audio session ID is retrieved from a player (e.g. MediaPlayer, AudioTrack or ToneGenerator) by use of the getAudioSessionId() method. Simply put: wait for one player to finish before starting the next playback or recording. This mechanism addresses Android's long-standing problem of sounds stepping on each other. (At present, a disgusting but effective workaround is to add a sleep to stagger multiple players.) Note that the iPhone does not have this problem.
A side benefit of this mechanism is that it liberates the people doing AudioPolicy and audio-routing work: it seems (my personal judgment) they no longer have to figure out how long to sleep, or where to put the sleep.
In AF, the MediaSyncEvent mechanism is represented by SyncEvent. Go look for yourselves.
The start function does not change much, aside from added SyncEvent handling.
createTrack also involves FastMixer and TimedTrack handling. The core is in PlaybackThread's createTrack_l and the Track constructor, especially where FastMixer is concerned.
Per Figure 2, FastMixer (FM) uses FastTrack as its internal data structure, whereas MT uses Track, so there is a one-to-one correspondence between them. FM keeps its FastTracks in an array, so a Track that uses FM points to its FastTrack through mFastIndex.
For now, just understand the relationship between FastTrack and Track; the data flow that follows needs its own detailed discussion.
Let's take a look at the workflow of MixerThread. This part is the highlight!
3.5 MixerThread workflow
The difficulty in this part is how FastMixer works. Fair warning: this feature is not finished yet; the code is full of FIXMEs. But don't gloat too soon — the next version will presumably complete it, and studying the immature version now will make the mature one much easier to digest later.
MT is a thread whose work happens mainly in threadLoop, which is defined by its base class PlaybackThread. The changes are roughly as follows:
PlaybackThread's threadLoop defines the general audio-processing flow, delegating the details to virtual functions such as prepareTracks_l, threadLoop_mix and threadLoop_write, implemented by subclasses.
MT's first big change is prepareTracks_l. The first thing it handles is fast-mixed tracks; the criterion is whether a Track has the TRACK_FAST flag set (amusingly, nothing in JB actually sets this flag yet). The logic here is fairly involved. FastMixer maintains a state machine, and since FastMixer runs in its own thread, thread synchronization is required, with that state controlling FastMixer's workflow. Because multiple threads are involved, the audio underrun and overrun states (don't know what those are? See the reference books mentioned earlier!) are also thorny to handle. In addition, each MT owns an AudioMixer object, which performs the mixing, downmixing, and other genuinely hard digital-audio work. In other words, the prepare work before mixing is still done on the MT thread, so it can be managed in one place (some Tracks don't use FastMixer, but everyone wants their data processed as soon as possible). On a multi-core CPU, spreading the mixing across several threads makes full use of CPU resources, and that should be the direction Android keeps evolving in — so I suspect this part of JB has not fully grown up yet. If FastMixer interests you, be sure to study prepareTracks_l carefully.
MT's next important function is threadLoop_mix. Because TimedTrack now exists, AudioMixer's process function carries a timestamp, the PTS (presentation timestamp). From a codec point of view there is also a DTS (decode timestamp). The difference: DTS is the decode time, but the encoder may compress the current frame with reference to a future frame, so the decoder must decode that future frame first and the current frame afterwards — yet at playback time you cannot present the future frame first; frames must be presented in display order regardless of decode order (see the sketch below). For more on PTS/DTS, look into I/B/P frame coding. Back in MT: this PTS is taken from the hardware HAL object, so it should be a timestamp maintained inside the HAL, which in principle is more accurate.
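A concrete (hypothetical) illustration of the reordering:

    // With B-frames, decode order differs from presentation order.
    // Display (PTS) order: I1 B2 B3 P4
    // Decode  (DTS) order: I1 P4 B2 B3  -- B2/B3 reference P4, so P4 is
    //                                      decoded first yet presented last.
    struct Frame { char type; int pts; int dts; };
    static const Frame decodeOrder[] = {
        {'I', 1, 1}, {'P', 4, 2}, {'B', 2, 3}, {'B', 3, 4},
    };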
After mixing come the effects (much like the previous version), and then threadLoop_write. The output endpoint of MT's threadLoop_write is the tricky mNormalSink from before; if it is not empty, its write function is called — think of it as calling NBAIO_Sink's non-blocking write. Per the Figure 2 analysis, that sink could be the MonoPipe, or it could be the AudioStreamOutSink (which wraps our old AudioStreamOutput). But MonoPipe's write only fills an internal buffer; it never touches the real audio HAL output. So what is going on? Bold hypothesis, to be verified carefully: it must be FastMixer that takes the data out of that buffer and writes it to the real audio HAL — after all, in the MixerThread constructor, mOutputSink (the one connected to AudioStreamOutput) was saved away for FastTrack's use.
DuplicatingThread and DirectOutputThread, by the way, have not changed much.
4 How FastMixer works
I used to think that FastMixer and MixerThread both did mixing but that output still happened on MixerThread. Judging from the MonoPipe analysis above, that judgment may be wrong.
More likely, output is also handled by FastMixer, while MixerThread does only part of the mixing and then passes its result to the FastMixer thread through the MonoPipe. FastMixer then mixes its own FastTracks' results with MT's result, and FastMixer writes the final output itself.
FM is defined in FastMixer.cpp, its core being a threadLoop. Since the preparatory work for all Tracks in AF is done by the MT thread, FM's threadLoop basically just acts on the state it receives.
The synchronization here uses Linux's very low-level futex (Fast Userspace muTEX). Damn — futex is the primitive on which POSIX mutexes are built. I don't know why the author didn't just use a mutex (probably efficiency again; but honestly, how much slower can a mutex be? Code is written for people, and this rather looks down on us). Taking multithreading to this level: respect. If you don't understand multithreaded programming, please study POSIX multithread programming carefully.
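For the curious, a minimal sketch of the futex idea using the raw syscall — not FastMixer's actual code:

    #include <linux/futex.h>   // FUTEX_WAIT, FUTEX_WAKE
    #include <sys/syscall.h>   // SYS_futex
    #include <unistd.h>        // syscall
    #include <atomic>

    static std::atomic<int> gState(0);

    static long futex(void* uaddr, int op, int val) {
        return syscall(SYS_futex, uaddr, op, val, nullptr, nullptr, 0);
    }

    // The waiter sleeps in the kernel only while gState is still 0; the
    // kernel re-checks the value atomically, avoiding lost wakeups.
    void wait() {
        while (gState.load() == 0) {
            futex(&gState, FUTEX_WAIT, 0);
        }
    }

    void wake() {
        gState.store(1);
        futex(&gState, FUTEX_WAKE, 1);  // wake at most one waiter
    }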
FastMixer also uses an AudioMixer internally for its own mixing, after which it writes the result out.
That is the brief story of FM; the details will have to wait until I have a real device in hand — kind souls are welcome to flash a 4.1 machine and lend it to me for research. (Personally I don't think it's that difficult; with enough poking, these things can always be figured out. For today, just grasp the general workflow of FM and MT.)
5 Other changes
Other changes include:
Much more attention is paid to debugging: a large number of XXXDump classes were added. It seems Google ran into plenty of problems during development too — if the code were simple, who would bother adding dump support?
An AudioWatchdog class was added to monitor AF's performance, e.g. CPU usage.
6 Summary
I remember when I studied AF in 2.2, AudioFlinger was only about 3K lines; in JB it is already 9K lines, not counting the auxiliary classes. Overall, JB's direction of change:
To make full use of multi-core resources, the appearance of FastMixer was inevitable, and the NBAIO interface came with it. This looks like it will pose a real challenge for HAL writers.
TimedTrack and SyncEvent will bring a better user experience for synchronization among RTP streams or multiple players.
New interfaces were added for the native layer to notify the Java layer.