Introduction: The previous article introduced Android's Ashmem (anonymous shared memory). Ashmem runs in the kernel as a driver. Application code that wants to use Ashmem can open the driver directly and talk to it, or it can use the library Android provides on top of the Ashmem driver (the recommended approach). This article covers Android's Ashmem library and how shared memory is implemented.
1. The Ashmem Library
The library code lives in the following files:
// library source
ashmem.h
ashmem-dev.c
It provides five functions for application-layer use:
int ashmem_create_region(const char *name, size_t size)
int ashmem_set_prot_region(int fd, int prot)
int ashmem_pin_region(int fd, size_t offset, size_t len)
int ashmem_unpin_region(int fd, size_t offset, size_t len)
int ashmem_get_size_region(int fd)
The library code is simple: it opens the Ashmem driver and interacts with it. Take ashmem_create_region as an example:
int ashmem_create_region(const char *name, size_t size)
{
    int ret;
    int fd = __ashmem_open();
    if (fd < 0) {
        return fd;
    }
    if (name) {
        char buf[ASHMEM_NAME_LEN] = {0};
        strlcpy(buf, name, sizeof(buf));
        ret = TEMP_FAILURE_RETRY(ioctl(fd, ASHMEM_SET_NAME, buf));
        if (ret < 0) {
            goto error;
        }
    }
    ret = TEMP_FAILURE_RETRY(ioctl(fd, ASHMEM_SET_SIZE, size));
    if (ret < 0) {
        goto error;
    }
    return fd;

error:
    close(fd);
    return ret;
}
It takes three steps:
- Call __ashmem_open to open the driver
- Call ioctl(fd, ASHMEM_SET_NAME, buf) to set the region's name
- Call ioctl(fd, ASHMEM_SET_SIZE, size) to set the region's size
__ashmem_open is implemented as follows:
static int __ashmem_open()
{
    int fd;

    pthread_mutex_lock(&__ashmem_lock);
    fd = __ashmem_open_locked();
    pthread_mutex_unlock(&__ashmem_lock);

    return fd;
}
__ashmem_open simply calls __ashmem_open_locked under a lock:
static int __ashmem_open_locked()
{
    struct stat st;

    // ASHMEM_DEVICE is /dev/ashmem; open the Ashmem device
    int fd = TEMP_FAILURE_RETRY(open(ASHMEM_DEVICE, O_RDWR));
    if (fd < 0) {
        return fd;
    }

    // check the device type: it must be a character device
    int ret = TEMP_FAILURE_RETRY(fstat(fd, &st));
    if (ret < 0 || !S_ISCHR(st.st_mode) || !st.st_rdev) {
        close(fd);
        return -1;
    }

    return fd;
}
__ashmem_open_locked opens the Ashmem device and adds a few sanity checks.
Next, look at the pin/unpin functions the library provides. ashmem_pin_region:
int ashmem_pin_region(int fd, size_t offset, size_t len)
{
    struct ashmem_pin pin = { offset, len };

    int ret = __ashmem_is_ashmem(fd);
    if (ret < 0) {
        return ret;
    }

    return TEMP_FAILURE_RETRY(ioctl(fd, ASHMEM_PIN, &pin));
}
The logic is straightforward: after verifying that the fd really refers to an ashmem region, it calls ioctl(fd, ASHMEM_PIN, &pin) to perform the pin. The other library functions follow the same pattern, so we won't analyze them one by one.
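To see how the five functions fit together, here is a minimal usage sketch (a hypothetical demo, not code from the library; it assumes the cutils headers are available and trims most error handling):

#include <sys/mman.h>
#include <unistd.h>
#include <cutils/ashmem.h>

// Hypothetical demo: create a 4 KB ashmem region, map it, write to it,
// and exercise unpin/pin around an idle period.
int ashmem_demo(void)
{
    int fd = ashmem_create_region("demo-region", 4096);
    if (fd < 0)
        return -1;

    // allow reads and writes through future mappings
    ashmem_set_prot_region(fd, PROT_READ | PROT_WRITE);

    char *base = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        close(fd);
        return -1;
    }

    base[0] = 42;                      // write through the mapping
    ashmem_unpin_region(fd, 0, 4096);  // kernel may reclaim this range under memory pressure
    ashmem_pin_region(fd, 0, 4096);    // pin it again before touching the data
    munmap(base, 4096);
    close(fd);
    return 0;
}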
2. Implementing Shared Memory
The principle of shared memory is roughly that multiple processes map the same block of memory: when one process reads or writes that block, the others can see the change. The steps go roughly like this (a standalone sketch follows the list):
- Somewhere, allocate a memory region mem and obtain an fd referring to it
- Process A maps the fd into its own address space via mmap
- Process B maps the fd into its own address space via mmap
- Whatever process A or process B writes into its mapped region is visible to the other, achieving inter-process shared memory

Viewed from the memory-mapping angle, both processes end up with virtual address ranges that are backed by the same physical memory.
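A minimal standalone sketch of this recipe (a hypothetical demo: the fd here is an ashmem region shared across fork(); error handling trimmed):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cutils/ashmem.h>

int main(void)
{
    // step 1: allocate a region and obtain its fd
    int fd = ashmem_create_region("demo", 4096);

    // step 2: process A maps the fd
    char *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {
        // step 3: process B maps the same fd (inherited across fork)
        char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        strcpy(b, "hello from B");   // step 4: B writes ...
        _exit(0);
    }

    wait(NULL);
    printf("A sees: %s\n", a);       // ... and A sees it immediately
    return 0;
}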
In Android, MemoryHeapBase and MemoryBase together implement shared memory. The walkthrough below uses AudioTrack from the Android audio system. The relevant code lives in:
// shared-memory code
IMemory.h
MemoryHeapBase.h
MemoryBase.h
Memory.cpp
MemoryHeapBase.cpp
MemoryBase.cpp
// Binder driver code
binder.c
binder.h
// AudioTrack-related code
AudioTrack.h
AudioFlinger.h
IAudioFlinger.h
MemoryDealer.h
AudioTrack.cpp
AudioFlinger.cpp
IAudioFlinger.cpp
MemoryDealer.cpp
An AudioTrack is created by AudioTrack::createTrack:
status_t AudioTrack::createTrack(
        int streamType,
        uint32_t sampleRate,
        int format,
        int channelCount,
        int frameCount,
        uint32_t flags,
        const sp<IMemory>& sharedBuffer,
        audio_io_handle_t output)
{
    // ... omitted ...
    sp<IAudioTrack> track = audioFlinger->createTrack(getpid(),
                                                      streamType,
                                                      sampleRate,
                                                      format,
                                                      channelCount,
                                                      frameCount,
                                                      ((uint16_t)flags) << 16,
                                                      sharedBuffer,
                                                      output,
                                                      &status);
    sp<IMemory> cblk = track->getCblk();
    mCblkMemory = cblk;
    mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
    // ... omitted ...
    return NO_ERROR;
}
It boils down to three steps:
- Call the AudioFlinger service to create an IAudioTrack
- Call IAudioTrack's getCblk function to obtain an IMemory
- Call IMemory's pointer function

We will walk through these three steps to understand how shared memory is implemented.
2.1 Creating the AudioTrack
Since AudioFlinger lives in a separate server process, the createTrack call is a remote call over Binder. The call flow is as follows (ignoring Binder internals):
audioFlinger.createTrack()   // audioFlinger's actual type is BpAudioFlinger
  -> Binder driver
  -> BnAudioFlinger.onTransact()
  -> AudioFlinger.createTrack()
  -> AudioFlinger::PlaybackThread.createTrack_l()
  -> new AudioFlinger::PlaybackThread::Track
  -> new AudioFlinger::ThreadBase::TrackBase
  -> AudioFlinger::Client.heap()
  -> new MemoryHeapBase
  -> new Allocation
The parts relevant to shared memory are the AudioFlinger::Client constructor, the AudioFlinger::ThreadBase::TrackBase constructor, AudioFlinger::Client.heap(), and new Allocation. Let's look at them one by one.
2.1.1 The AudioFlinger::Client constructor
AudioFlinger creates one AudioFlinger::Client per client process, keyed by the process pid; every AudioTrack being created is associated with a specific AudioFlinger::Client. The code:
sp<IAudioTrack> AudioFlinger::createTrack(
        pid_t pid,
        int streamType,
        uint32_t sampleRate,
        int format,
        int channelCount,
        int frameCount,
        uint32_t flags,
        const sp<IMemory>& sharedBuffer,
        int output,
        status_t *status)
{
    // ... irrelevant code omitted ...
    wclient = mClients.valueFor(pid);
    if (wclient != NULL) {
        client = wclient.promote();
    } else {
        client = new Client(this, pid);
        mClients.add(pid, client);
    }
    // associate the new AudioTrack with the client
    track = thread->createTrack_l(client, streamType, sampleRate, format,
            channelCount, frameCount, sharedBuffer, &lStatus);
}
The Client constructor:
AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
    : RefBase(),
      mAudioFlinger(audioFlinger),
      mMemoryDealer(new MemoryDealer(1024*1024, "AudioFlinger::Client")),
      mPid(pid)
{
    // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
}
As shown above, constructing an AudioFlinger::Client creates a MemoryDealer whose first argument is 1024*1024 bytes (matching the comment: 32 tracks × 8 buffers × 4 KB = 1 MB). The MemoryDealer constructor:
MemoryDealer::MemoryDealer(size_t size, const char* name)
    : mHeap(new MemoryHeapBase(size, 0, name)),
      mAllocator(new SimpleBestFitAllocator(size))
{
}
The MemoryDealer constructor creates two objects: a MemoryHeapBase and a SimpleBestFitAllocator. The MemoryHeapBase constructor:
MemoryHeapBase::MemoryHeapBase(size_t size, uint32_t flags, char const * name)
    : mFD(-1), mSize(0), mBase(MAP_FAILED), mFlags(flags),
      mDevice(0), mNeedUnmap(false)
{
    const size_t pagesize = getpagesize();
    // round the size up to a whole number of pages
    size = ((size + pagesize-1) & ~(pagesize-1));
    int fd = ashmem_create_region(name == NULL ? "MemoryHeapBase" : name, size);
    LOGE_IF(fd<0, "error creating ashmem region: %s", strerror(errno));
    if (fd >= 0) {
        if (mapfd(fd, size) == NO_ERROR) {
            if (flags & READ_ONLY) {
                ashmem_set_prot_region(fd, PROT_READ);
            }
        }
    }
}
It first calls the Ashmem library function ashmem_create_region() to create the anonymous memory, then calls mapfd to map it into the current process (here, the server process that hosts AudioFlinger). This completes steps 1 and 2 of the shared-memory recipe above. SimpleBestFitAllocator keeps track of how the MemoryHeapBase has been carved up (how many blocks have been handed out) and assists in constructing Allocation objects (Allocation inherits from MemoryBase).
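For reference, the core of mapfd is just an mmap of the ashmem fd into the current process. A condensed sketch (based on MemoryHeapBase.cpp, with error handling and the no-local-mapping path trimmed):

status_t MemoryHeapBase::mapfd(int fd, size_t size, uint32_t offset)
{
    // map the ashmem region into this process's address space
    void* base = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, offset);
    if (base == MAP_FAILED) {
        close(fd);
        return -errno;
    }
    mFD = fd;
    mBase = base;
    mSize = size;
    mNeedUnmap = true;
    return NO_ERROR;
}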
The SimpleBestFitAllocator constructor:
SimpleBestFitAllocator::SimpleBestFitAllocator(size_t size)
{
    size_t pagesize = getpagesize();
    mHeapSize = ((size + pagesize-1) & ~(pagesize-1));
    chunk_t* node = new chunk_t(0, mHeapSize / kMemoryAlign);
    mList.insertHead(node);
}
It creates a single chunk_t covering the whole heap and inserts it into mList; note that chunk sizes are expressed in units of kMemoryAlign, so the initial chunk's size is size/kMemoryAlign. As more AudioTracks are created later, this chunk_t gets split into smaller chunks.
2.1.2 AudioFlinger::ThreadBase::TrackBase::TrackBase构造函数
The AudioFlinger::ThreadBase::TrackBase constructor:
AudioFlinger::ThreadBase::TrackBase::TrackBase(
        const wp<ThreadBase>& thread,
        const sp<Client>& client,
        uint32_t sampleRate,
        int format,
        int channelCount,
        int frameCount,
        uint32_t flags,
        const sp<IMemory>& sharedBuffer)
    : RefBase(),
      mThread(thread),
      mClient(client),
      mCblk(0),
      mFrameCount(0),
      mState(IDLE),
      mClientTid(-1),
      mFormat(format),
      mFlags(flags & ~SYSTEM_FLAGS_MASK)
{
    size_t size = sizeof(audio_track_cblk_t);
    size_t bufferSize = frameCount*channelCount*sizeof(int16_t);
    if (sharedBuffer == 0) {
        size += bufferSize;
    }
    if (client != NULL) {
        mCblkMemory = client->heap()->allocate(size);
        if (mCblkMemory != 0) {
            mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
            if (mCblk) { // construct the shared structure in-place
                new(mCblk) audio_track_cblk_t();
                // ... various field assignments omitted ...
            }
        } else {
            // out-of-memory handling omitted
            return;
        }
    } else {
        // sharedBuffer path omitted
    }
}
As shown above, the constructor calls client->heap()->allocate(size) and stores the result in mCblkMemory, whose type is sp<IMemory>. The allocate call ultimately runs this code:
sp<IMemory> MemoryDealer::allocate(size_t size)
{
    sp<IMemory> memory;
    const ssize_t offset = allocator()->allocate(size);
    if (offset >= 0) {
        memory = new Allocation(this, heap(), offset, size);
    }
    return memory;
}
It first calls allocator()->allocate(size) to obtain an offset, then constructs an Allocation from that offset. Allocation inherits from MemoryBase. The allocate call ends up in SimpleBestFitAllocator::alloc:
ssize_t SimpleBestFitAllocator::alloc(size_t size, uint32_t flags)
{
    size = (size + kMemoryAlign-1) / kMemoryAlign;
    // find a free chunk whose size is at least `size` (search loop omitted)
    if (free_chunk) {
        const size_t free_size = free_chunk->size;
        free_chunk->free = 0;
        free_chunk->size = size;
        if (free_size > size) {
            int extra = 0;
            if (flags & PAGE_ALIGNED)
                extra = ( -free_chunk->start & ((pagesize/kMemoryAlign)-1) );
            if (extra) {
                chunk_t* split = new chunk_t(free_chunk->start, extra);
                free_chunk->start += extra;
                mList.insertBefore(free_chunk, split);
            }
            LOGE_IF((flags&PAGE_ALIGNED) &&
                    ((free_chunk->start*kMemoryAlign)&(pagesize-1)),
                    "PAGE_ALIGNED requested, but page is not aligned!!!");
            const ssize_t tail_free = free_size - (size+extra);
            if (tail_free > 0) {
                chunk_t* split = new chunk_t(
                        free_chunk->start + free_chunk->size, tail_free);
                mList.insertAfter(free_chunk, split);
            }
        }
        return (free_chunk->start)*kMemoryAlign;
    }
    return NO_MEMORY;
}
As shown above, a suitable chunk is located (splitting it when it is larger than needed) and its byte offset is returned; the caller then uses that offset to construct the Allocation. Because SimpleBestFitAllocator records and manages every allocation carved out of the MemoryHeapBase, a process that creates several AudioTracks ends up with one MemoryHeapBase subdivided into an audio_track_cblk_t-plus-buffer block per track, with any remaining space kept as free chunks.
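To make the MemoryDealer/allocator relationship concrete, here is a hypothetical standalone use of the same classes (not from AudioFlinger; error handling trimmed):

// server side: carve two blocks out of one ashmem-backed heap
sp<MemoryDealer> dealer = new MemoryDealer(1024*1024, "demo");
sp<IMemory> blockA = dealer->allocate(4096);   // an Allocation at some offset
sp<IMemory> blockB = dealer->allocate(4096);   // a second, disjoint Allocation
memset(blockA->pointer(), 0, blockA->size());  // both point into the same mmap'ed heap
memset(blockB->pointer(), 0, blockB->size());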
2.2 Obtaining the IMemory
The IAudioTrack held by the client process is actually a BpAudioTrack, so calling IAudioTrack's getCblk executes BpAudioTrack's getCblk. The execution path is:
BpAudioTrack.getCblk()
  -> Binder driver
  -> BnAudioTrack.onTransact()
  -> PlaybackThread::Track.getCblk()
  -> AudioFlinger::ThreadBase::TrackBase::getCblk()
AudioFlinger::ThreadBase::TrackBase::getCblk is simply:
sp<IMemory> AudioFlinger::ThreadBase::TrackBase::getCblk() const
{
    return mCblkMemory;
}
This mCblkMemory is exactly what the AudioFlinger::ThreadBase::TrackBase constructor created earlier; its actual type is Allocation. As we know, when a client calls a function on a server, execution reaches the server's Bn* onTransact, where the return value is serialized and sent back to the client. Here is how BnAudioTrack.onTransact handles getCblk:
status_t BnAudioTrack::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case GET_CBLK: {
            CHECK_INTERFACE(IAudioTrack, data, reply);
            reply->writeStrongBinder(getCblk()->asBinder());
            return NO_ERROR;
        } break;
        // other cases omitted
    }
}
The logic is simple: reply->writeStrongBinder serializes the IMemory's binder. The details depend on Binder internals, which we won't expand on here. Once the Binder call completes, the client constructs a BpMemory object as a proxy for the server-side mCblkMemory, as sketched below.
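For completeness, the client-side counterpart looks roughly like this (a condensed sketch of BpAudioTrack's getCblk; the real code lives in the IAudioTrack implementation):

virtual sp<IMemory> getCblk() const
{
    Parcel data, reply;
    data.writeInterfaceToken(IAudioTrack::getInterfaceDescriptor());
    status_t status = remote()->transact(GET_CBLK, data, &reply);
    sp<IMemory> cblk;
    if (status == NO_ERROR) {
        // interface_cast creates the BpMemory proxy mentioned above
        cblk = interface_cast<IMemory>(reply.readStrongBinder());
    }
    return cblk;
}

Did you notice something, though? So far only the server side has allocated and mapped the anonymous memory; the client has not mapped anything yet. With that question in mind, let's see how the client completes its mapping.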
2.3 Client-Side Memory Mapping
The last step of creating an AudioTrack is calling IMemory's pointer function. From our Binder knowledge, the IMemory here is actually a BpMemory; BpMemory does not override pointer, so the call flow is:
BpMemory.pointer()
  -> IMemory.pointer()
  -> BpMemory.getMemory()
  -> Binder driver
  -> BnMemory.onTransact()
BnMemory's onTransact is implemented as follows:
status_t BnMemory::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case GET_MEMORY: {
            CHECK_INTERFACE(IMemory, data, reply);
            ssize_t offset;
            size_t size;
            reply->writeStrongBinder( getMemory(&offset, &size)->asBinder() );
            reply->writeInt32(offset);
            reply->writeInt32(size);
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
The getMemory function is implemented in MemoryBase:
sp<IMemoryHeap> MemoryBase::getMemory(ssize_t* offset, size_t* size) const
{
    if (offset) *offset = mOffset;
    if (size) *size = mSize;
    return mHeap;
}
mHeap was passed in when the Allocation was constructed; it was created in the AudioFlinger::Client constructor and its actual type is MemoryHeapBase. The offset and size are serialized back to the client process as well. Now turn back to BpMemory's getMemory:
sp<IMemoryHeap> BpMemory::getMemory(ssize_t* offset, size_t* size) const
{
    if (mHeap == 0) {
        Parcel data, reply;
        data.writeInterfaceToken(IMemory::getInterfaceDescriptor());
        if (remote()->transact(GET_MEMORY, data, &reply) == NO_ERROR) {
            sp<IBinder> heap = reply.readStrongBinder();
            ssize_t o = reply.readInt32();
            size_t s = reply.readInt32();
            if (heap != 0) {
                mHeap = interface_cast<IMemoryHeap>(heap);
                if (mHeap != 0) {
                    mOffset = o;
                    mSize = s;
                }
            }
        }
    }
    if (offset) *offset = mOffset;
    if (size) *size = mSize;
    return mHeap;
}
interface_cast<IMemoryHeap>(heap) actually creates a BpMemoryHeap, and the offset and size returned from the server are saved for later use (MemoryBase's job is to manage MemoryHeapBase in chunks, so it naturally needs to know the offset and size). Now look back at IMemory's pointer implementation:
void* IMemory::pointer() const {
    ssize_t offset;
    sp<IMemoryHeap> heap = getMemory(&offset);
    void* const base = heap!=0 ? heap->base() : MAP_FAILED;
    if (base == MAP_FAILED)
        return 0;
    return static_cast<char*>(base) + offset;
}
It first calls getMemory, which as we saw returns the BpMemoryHeap, then calls its base() function (an inline that forwards to the virtual getBase()). Looking at the implementation, it feels like we are finally getting close to the actual mapping:
void* BpMemoryHeap::getBase() const {
    assertMapped();
    return mBase;
}
It calls assertMapped; the name is telling: it makes sure the heap has already been mapped. The implementation:
void BpMemoryHeap::assertMapped() const
{
    if (mHeapId == -1) {
        sp<IBinder> binder(const_cast<BpMemoryHeap*>(this)->asBinder());
        sp<BpMemoryHeap> heap(static_cast<BpMemoryHeap*>(find_heap(binder).get()));
        heap->assertReallyMapped();
        if (heap->mBase != MAP_FAILED) {
            Mutex::Autolock _l(mLock);
            if (mHeapId == -1) {
                mBase = heap->mBase;
                mSize = heap->mSize;
                android_atomic_write( dup( heap->mHeapId ), &mHeapId );
            }
        } else {
            // something went wrong
            free_heap(binder);
        }
    }
}
mHeapId is initialized to -1, so the first call enters the if branch. find_heap merely adds a caching layer, so we can skip it. The call then reaches the heap's assertReallyMapped, after which mBase and mSize are saved. assertReallyMapped:
void BpMemoryHeap::assertReallyMapped() const
{
    if (mHeapId == -1) {
        // ... omitted ...
        Parcel data, reply;
        data.writeInterfaceToken(IMemoryHeap::getInterfaceDescriptor());
        status_t err = remote()->transact(HEAP_ID, data, &reply);
        int parcel_fd = reply.readFileDescriptor();
        ssize_t size = reply.readInt32();
        uint32_t flags = reply.readInt32();
        int fd = dup( parcel_fd );
        int access = PROT_READ;
        if (!(flags & READ_ONLY)) {
            access |= PROT_WRITE;
        }
        Mutex::Autolock _l(mLock);
        if (mHeapId == -1) {
            mRealHeap = true;
            mBase = mmap(0, size, access, MAP_SHARED, fd, 0);
            if (mBase == MAP_FAILED) {
                LOGE("cannot map BpMemoryHeap (binder=%p), size=%ld, fd=%d (%s)",
                        asBinder().get(), size, fd, strerror(errno));
                close(fd);
            } else {
                mSize = size;
                mFlags = flags;
                android_atomic_write(fd, &mHeapId);
            }
        }
    }
}
It first issues remote()->transact(HEAP_ID, ...), then calls dup to produce a local fd, and finally completes the memory mapping with mmap. From the Binder mechanism we know remote() actually returns a BpBinder object, so remote()->transact(HEAP_ID, ...) eventually reaches BnMemoryHeap's onTransact:
status_t BnMemoryHeap::onTransact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case HEAP_ID: {
            CHECK_INTERFACE(IMemoryHeap, data, reply);
            reply->writeFileDescriptor(getHeapID());
            reply->writeInt32(getSize());
            reply->writeInt32(getFlags());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
It writes the heap's fd, size, and flags into the reply. So once the client process obtains this fd, a single mmap completes the mapping of the anonymous memory created inside AudioFlinger. But is it really that simple? In Linux, an fd is just an index into a per-process table (a plain integer) and is only valid within its own process. How can an fd passed from one process be usable in another? The answer lies in the Binder driver, which gives fds special treatment:
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    // ... omitted ...
    fp = (struct flat_binder_object *)(t->buffer->data + *offp);
    switch (fp->type) {
    case BINDER_TYPE_FD: {
        int target_fd;
        struct file *file;

        if (reply) {
            if (!(in_reply_to->flags & TF_ACCEPT_FDS)) {
                binder_user_error("binder: %d:%d got reply with fd, %ld, but target does not allow fds\n",
                        proc->pid, thread->pid, fp->handle);
                return_error = BR_FAILED_REPLY;
                goto err_fd_not_allowed;
            }
        } else if (!target_node->accept_fds) {
            binder_user_error("binder: %d:%d got transaction with fd, %ld, but target does not allow fds\n",
                    proc->pid, thread->pid, fp->handle);
            return_error = BR_FAILED_REPLY;
            goto err_fd_not_allowed;
        }

        file = fget(fp->handle);
        if (file == NULL) {
            binder_user_error("binder: %d:%d got transaction with invalid fd, %ld\n",
                    proc->pid, thread->pid, fp->handle);
            return_error = BR_FAILED_REPLY;
            goto err_fget_failed;
        }
        if (security_binder_transfer_file(proc->tsk, target_proc->tsk, file) < 0) {
            fput(file);
            return_error = BR_FAILED_REPLY;
            goto err_get_unused_fd_failed;
        }
        target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);
        if (target_fd < 0) {
            fput(file);
            return_error = BR_FAILED_REPLY;
            goto err_get_unused_fd_failed;
        }
        task_fd_install(target_proc, target_fd, file);
        trace_binder_transaction_fd(t, fp->handle, target_fd);
        binder_debug(BINDER_DEBUG_TRANSACTION,
                     "        fd %ld -> %d\n", fp->handle, target_fd);
        /* TODO: fput? */
        fp->handle = target_fd;
    } break;
    // ... other object types omitted ...
    }
    // ... omitted ...
}
For a BINDER_TYPE_FD object, the driver first resolves the sender's fd to its underlying struct file (fget), then calls task_get_unused_fd_flags to reserve an unused fd slot in the receiving (client) process, and finally calls task_fd_install to install that struct file into the slot. What reaches the client is this freshly installed fd, not the server's fd value; both now refer to the same open file. To the client process all of this is transparent; it looks as if it received the server's fd directly.
At this point, all three steps of the shared-memory recipe from earlier are complete. From now on, the two processes can freely read and write their own mapped regions, and the other side immediately "sees" the latest data.
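To wrap up, here is a condensed sketch of the whole flow using the classes discussed above (hypothetical service code, not from AudioFlinger):

// ---- server process ----
sp<MemoryHeapBase> heap = new MemoryHeapBase(4096);   // ashmem_create_region + mmap (steps 1 and 2)
sp<MemoryBase> mem = new MemoryBase(heap, 0, 4096);   // an offset/size window onto the heap
// return mem (an IMemory binder object) to the client in some Binder call

// ---- client process ----
sp<IMemory> mem = ...;      // the BpMemory proxy received over Binder
void* p = mem->pointer();   // first call triggers the HEAP_ID transaction:
                            // the driver translates the fd, mmap maps the heap (step 3),
                            // and pointer() returns base + offset
// reads/writes through p are now visible to the server, and vice versa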