
Source Code Analysis of the Server Startup Process in Android's Binder Inter-Process Communication (IPC) Mechanism


In the previous article Android系統進程間通信(IPC)機制Binder中的Server和Client獲得Service Manager接口之路, we looked at how a Server in Android's Binder IPC mechanism obtains the Service Manager remote interface, that is, how the defaultServiceManager function is implemented. Once a Server has obtained that remote interface, it adds its own Service to the Service Manager and then starts itself up to wait for Client requests. In this article we analyze the source code to see what the Server startup process looks like.

We will use a concrete example to illustrate how a Server starts up in the Binder mechanism. Android provides multimedia playback as a service, so we will study the implementation of MediaPlayerService to understand how the Media Server starts.

First, take a look at the MediaPlayerService class diagram, which will make the following description easier to follow.

The protagonist of this article, MediaPlayerService, inherits from the BnMediaPlayerService class. Readers familiar with the Binder mechanism will know that BnMediaPlayerService is a Binder Native class used to handle Client requests. BnMediaPlayerService in turn inherits from BnInterface<IMediaPlayerService>, where BnInterface is a template class defined in frameworks/base/include/binder/IInterface.h:

template<typename INTERFACE> 
class BnInterface : public INTERFACE, public BBinder 
{ 
public: 
 virtual sp<IInterface>  queryLocalInterface(const String16& _descriptor); 
 virtual const String16&  getInterfaceDescriptor() const; 
 
protected: 
 virtual IBinder*   onAsBinder(); 
}; 

From this we can see that BnMediaPlayerService effectively inherits from both IMediaPlayerService and BBinder. IMediaPlayerService and BBinder in turn inherit from IInterface and IBinder respectively, and both IInterface and IBinder inherit from RefBase.
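The inheritance relationships just described can be summarized as follows (a rough sketch of the text above, not literal declarations; each arrow reads as "derives from"):

// MediaPlayerService --> BnMediaPlayerService --> BnInterface<IMediaPlayerService> 
// BnInterface<IMediaPlayerService> --> IMediaPlayerService --> IInterface --> RefBase 
// BnInterface<IMediaPlayerService> --> BBinder             --> IBinder    --> RefBase 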

In fact, BnMediaPlayerService does not receive Client requests directly. Instead, requests from Clients are received by IPCThreadState, which in turn relies on the ProcessState class to talk to the Binder driver. For the relationship between IPCThreadState and ProcessState, see the previous article Android系統進程間通信(IPC)機制Binder中的Server和Client獲得Service Manager接口之路; it will also come up again below. After IPCThreadState receives a request from a Client, it calls the transact function of the BBinder class with the relevant parameters, and BBinder::transact eventually calls BnMediaPlayerService::onTransact, at which point the Client's request is actually handled.

Now that we understand the MediaPlayerService class structure, we can get to the main topic of this article.

First, let us see how MediaPlayerService is started. The code that starts MediaPlayerService lives in frameworks/base/media/mediaserver/main_mediaserver.cpp:

int main(int argc, char** argv) 
{ 
 sp<ProcessState> proc(ProcessState::self()); 
 sp<IServiceManager> sm = defaultServiceManager(); 
 LOGI("ServiceManager: %p", sm.get()); 
 AudioFlinger::instantiate(); 
 MediaPlayerService::instantiate(); 
 CameraService::instantiate(); 
 AudioPolicyService::instantiate(); 
 ProcessState::self()->startThreadPool(); 
 IPCThreadState::self()->joinThreadPool(); 
} 

We will not concern ourselves with the AudioFlinger and CameraService related code here.
Look at this statement first:

                       sp<ProcessState> proc(ProcessState::self());  

This statement calls ProcessState::self() to create a ProcessState instance. ProcessState::self() is a static member function of the ProcessState class, defined in frameworks/base/libs/binder/ProcessState.cpp:

sp<ProcessState> ProcessState::self() 
{ 
 if (gProcess != NULL) return gProcess; 
  
 AutoMutex _l(gProcessMutex); 
 if (gProcess == NULL) gProcess = new ProcessState; 
 return gProcess; 
} 

As we can see, this function returns the globally unique ProcessState instance gProcess. The global singleton gProcess is defined in frameworks/base/libs/binder/Static.cpp:

                        Mutex gProcessMutex; 
                        sp<ProcessState> gProcess;  

Now look at the ProcessState constructor:

ProcessState::ProcessState() 
 : mDriverFD(open_driver()) 
 , mVMStart(MAP_FAILED) 
 , mManagesContexts(false) 
 , mBinderContextCheckFunc(NULL) 
 , mBinderContextUserData(NULL) 
 , mThreadPoolStarted(false) 
 , mThreadPoolSeq(1) 
{ 
 if (mDriverFD >= 0) { 
  // XXX Ideally, there should be a specific define for whether we 
  // have mmap (or whether we could possibly have the kernel module 
  // availabla). 
#if !defined(HAVE_WIN32_IPC) 
  // mmap the binder, providing a chunk of virtual address space to receive transactions. 
  mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0); 
  if (mVMStart == MAP_FAILED) { 
   // *sigh* 
   LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n"); 
   close(mDriverFD); 
   mDriverFD = -1; 
  } 
#else 
  mDriverFD = -1; 
#endif 
 } 
 if (mDriverFD < 0) { 
  // Need to run without the driver, starting our own thread pool. 
 } 
} 

There are two key points in this function: first, it calls open_driver to open the Binder device file /dev/binder and saves the resulting file descriptor in the member variable mDriverFD; second, it calls mmap to map the device file /dev/binder into memory.

Let us look at the implementation of open_driver first; it is also located in frameworks/base/libs/binder/ProcessState.cpp:

static int open_driver() 
{ 
 if (gSingleProcess) { 
  return -1; 
 } 
 
 int fd = open("/dev/binder", O_RDWR); 
 if (fd >= 0) { 
  fcntl(fd, F_SETFD, FD_CLOEXEC); 
  int vers; 
#if defined(HAVE_ANDROID_OS) 
  status_t result = ioctl(fd, BINDER_VERSION, &vers); 
#else 
  status_t result = -1; 
  errno = EPERM; 
#endif 
  if (result == -1) { 
   LOGE("Binder ioctl to obtain version failed: %s", strerror(errno)); 
   close(fd); 
   fd = -1; 
  } 
  if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) { 
   LOGE("Binder driver protocol does not match user space protocol!"); 
   close(fd); 
   fd = -1; 
  } 
#if defined(HAVE_ANDROID_OS) 
  size_t maxThreads = 15; 
  result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); 
  if (result == -1) { 
   LOGE("Binder ioctl to set max threads failed: %s", strerror(errno)); 
  } 
#endif 
   
 } else { 
  LOGW("Opening '/dev/binder' failed: %s\n", strerror(errno)); 
 } 
 return fd; 
} 

This function uses the open system call to open the /dev/binder device file, and then uses ioctl to issue two commands to the Binder driver: BINDER_VERSION, which obtains the driver's current protocol version, and BINDER_SET_MAX_THREADS, which tells the Binder driver that MediaPlayerService may start at most 15 threads concurrently to handle Client requests.

For the implementation of open inside the Binder driver, see the earlier article 淺談Service Manager成為Android進程間通信(IPC)機制Binder守護進程之路; we will not repeat it here. Once /dev/binder has been opened, the Binder driver creates a struct binder_proc instance for the MediaPlayerService process to maintain that process's context.

Let us look at how the ioctl call executes the BINDER_VERSION command:

                        status_t result = ioctl(fd, BINDER_VERSION, &vers);  

This call eventually reaches the binder_ioctl function of the Binder driver; we only look at the part related to BINDER_VERSION:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 
{ 
 int ret; 
 struct binder_proc *proc = filp->private_data; 
 struct binder_thread *thread; 
 unsigned int size = _IOC_SIZE(cmd); 
 void __user *ubuf = (void __user *)arg; 
 
 /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/ 
 
 ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); 
 if (ret) 
  return ret; 
 
 mutex_lock(&binder_lock); 
 thread = binder_get_thread(proc); 
 if (thread == NULL) { 
  ret = -ENOMEM; 
  goto err; 
 } 
 
 switch (cmd) { 
 ...... 
 case BINDER_VERSION: 
  if (size != sizeof(struct binder_version)) { 
   ret = -EINVAL; 
   goto err; 
  } 
  if (put_user(BINDER_CURRENT_PROTOCOL_VERSION, &((struct binder_version *)ubuf)->protocol_version)) { 
   ret = -EINVAL; 
   goto err; 
  } 
  break; 
 ...... 
 } 
 ret = 0; 
err: 
  ...... 
 return ret; 
} 

This is very simple: the driver just writes BINDER_CURRENT_PROTOCOL_VERSION into the user-space buffer pointed to by arg and returns. BINDER_CURRENT_PROTOCOL_VERSION is a macro defined in kernel/common/drivers/staging/android/binder.h:

                     /* This is the current protocol version. */ 
             #define BINDER_CURRENT_PROTOCOL_VERSION 7  

Why cast ubuf to a struct binder_version and then write through its protocol_version member, when after going around in a circle the value still ends up in ubuf? The definition of struct binder_version, also in kernel/common/drivers/staging/android/binder.h, makes this clear:

/* Use with BINDER_VERSION, driver fills in fields. */ 
struct binder_version { 
 /* driver protocol version -- increment with incompatible change */ 
 signed long protocol_version; 
}; 

As the comment indicates, this is done for compatibility: in the future the version number may well no longer be represented as a signed long.

One important point to note: since this is the first time binder_ioctl has been entered after opening /dev/binder, the call to binder_get_thread here creates a struct binder_thread for the current thread to maintain its context; see 淺談Service Manager成為Android進程間通信(IPC)機制Binder守護進程之路 for details.

Next, let us look at how the ioctl call executes the BINDER_SET_MAX_THREADS command:

                   result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);  

This call again ends up in the binder_ioctl function of the Binder driver; we only look at the part related to BINDER_SET_MAX_THREADS:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 
{ 
 int ret; 
 struct binder_proc *proc = filp->private_data; 
 struct binder_thread *thread; 
 unsigned int size = _IOC_SIZE(cmd); 
 void __user *ubuf = (void __user *)arg; 
 
 /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/ 
 
 ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); 
 if (ret) 
  return ret; 
 
 mutex_lock(&binder_lock); 
 thread = binder_get_thread(proc); 
 if (thread == NULL) { 
  ret = -ENOMEM; 
  goto err; 
 } 
 
 switch (cmd) { 
 ...... 
 case BINDER_SET_MAX_THREADS: 
  if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) { 
   ret = -EINVAL; 
   goto err; 
  } 
  break; 
 ...... 
 } 
 ret = 0; 
err: 
 ...... 
 return ret; 
} 

The implementation here is also very simple: it just stores the value passed in from user space into proc->max_threads and is done. Note that this time, when binder_get_thread is called, the struct binder_thread for the current thread can be found in proc->threads, because it was created earlier and stored in the proc->threads red-black tree.

Back in the ProcessState constructor, the device file /dev/binder is also mapped into memory with mmap. This function was already described in detail in 淺談Service Manager成為Android進程間通信(IPC)機制Binder守護進程之路, so we will not repeat it here. The macro BINDER_VM_SIZE is defined in ProcessState.cpp itself:

             #define BINDER_VM_SIZE ((1*1024*1024) - (4096 *2))  

Once the mmap call completes, the Binder driver has reserved BINDER_VM_SIZE bytes of address space for the current process.
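For reference, this works out to BINDER_VM_SIZE = 1*1024*1024 - 4096*2 = 1048576 - 8192 = 1040384 bytes, i.e. 1 MB minus two 4 KB pages.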

With that, the globally unique ProcessState variable gProcess has been created. Back in the main function of frameworks/base/media/mediaserver/main_mediaserver.cpp, the next step is to call defaultServiceManager to obtain the Service Manager remote interface. This was described in detail in the previous article 淺談Android系統進程間通信(IPC)機制Binder中的Server和Client獲得Service Manager接口之路, which readers may want to look back at.

Next comes MediaPlayerService::instantiate, which adds MediaPlayerService to the Service Manager. This function is defined in frameworks/base/media/libmediaplayerservice/MediaPlayerService.cpp:

void MediaPlayerService::instantiate() { 
 defaultServiceManager()->addService( 
   String16("media.player"), new MediaPlayerService()); 
} 

Let us focus on the IServiceManager::addService call, since it will deepen our understanding of the Binder mechanism.

As explained in the previous article 淺談Android系統進程間通信(IPC)機制Binder中的Server和Client獲得Service Manager接口之路, what defaultServiceManager returns is actually a BpServiceManager instance, so let us look at BpServiceManager::addService, implemented in frameworks/base/libs/binder/IServiceManager.cpp:

class BpServiceManager : public BpInterface<IServiceManager> 
{ 
public: 
 BpServiceManager(const sp<IBinder>& impl) 
  : BpInterface<IServiceManager>(impl) 
 { 
 } 
 
 ...... 
 
 virtual status_t addService(const String16& name, const sp<IBinder>& service) 
 { 
  Parcel data, reply; 
  data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor()); 
  data.writeString16(name); 
  data.writeStrongBinder(service); 
  status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply); 
  return err == NO_ERROR ? reply.readExceptionCode() : err; 
 } 
 
 ...... 
 
}; 

The Parcel class here is used to serialize the data sent between processes.
Look at this call first:

           data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());  

IServiceManager::getInterfaceDescriptor() returns a string, namely "android.os.IServiceManager"; see the IServiceManager implementation for details. Now look at Parcel::writeInterfaceToken, located in frameworks/base/libs/binder/Parcel.cpp:

// Write RPC headers. (previously just the interface token) 
status_t Parcel::writeInterfaceToken(const String16& interface) 
{ 
 writeInt32(IPCThreadState::self()->getStrictModePolicy() | 
    STRICT_MODE_PENALTY_GATHER); 
 // currently the interface identification token is just its name as a string 
 return writeString16(interface); 
} 

Its job is to write one integer and one string into the Parcel.

Now look at the next call:

                    data.writeString16(name);  

This writes another string into the Parcel; name here is the "media.player" string passed in above.
Moving on:

               data.writeStrongBinder(service);  

This writes a Binder object into the Parcel. We will look at this function closely, because it deals with transferring a Binder entity between processes; it is relatively involved, deserves careful attention, and is one of the keys to understanding the Binder mechanism. Note that the service parameter here is a MediaPlayerService object.

status_t Parcel::writeStrongBinder(const sp<IBinder>& val) 
{ 
 return flatten_binder(ProcessState::self(), val, this); 
} 

The flatten_binder function should look familiar. In the earlier article 淺談Service Manager成為Android進程間通信(IPC)機制Binder守護進程之路 we mentioned that the Binder driver uses struct flat_binder_object to represent a binder object in transit. It is defined as follows:

/* 
 * This is the flattened representation of a Binder object for transfer 
 * between processes. The 'offsets' supplied as part of a binder transaction 
 * contains offsets into the data where these structures occur. The Binder 
 * driver takes care of re-writing the structure type and data as it moves 
 * between processes. 
 */ 
struct flat_binder_object { 
 /* 8 bytes for large_flat_header. */ 
 unsigned long  type; 
 unsigned long  flags; 
 
 /* 8 bytes of data. */ 
 union { 
  void  *binder; /* local object */ 
  signed long handle;  /* remote object */ 
 }; 
 
 /* extra data associated with local object */ 
 void   *cookie; 
}; 

For the meaning of each member, see the reference Android Binder設計與實現.
Now let us step into the flatten_binder function:

status_t flatten_binder(const sp<ProcessState>& proc, 
 const sp<IBinder>& binder, Parcel* out) 
{ 
 flat_binder_object obj; 
  
 obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS; 
 if (binder != NULL) { 
  IBinder *local = binder->localBinder(); 
  if (!local) { 
   BpBinder *proxy = binder->remoteBinder(); 
   if (proxy == NULL) { 
    LOGE("null proxy"); 
   } 
   const int32_t handle = proxy ? proxy->handle() : 0; 
   obj.type = BINDER_TYPE_HANDLE; 
   obj.handle = handle; 
   obj.cookie = NULL; 
  } else { 
   obj.type = BINDER_TYPE_BINDER; 
   obj.binder = local->getWeakRefs(); 
   obj.cookie = local; 
  } 
 } else { 
  obj.type = BINDER_TYPE_BINDER; 
  obj.binder = NULL; 
  obj.cookie = NULL; 
 } 
  
 return finish_flatten_binder(binder, obj, out); 
} 

The flags field of the flat_binder_object is initialized first:

               obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;  

0x7f is the lowest priority at which the thread handling request packets for this Binder entity may run, and FLAT_BINDER_FLAG_ACCEPTS_FDS means this Binder entity accepts file descriptors: when it receives one, the corresponding file is opened in the local process.
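As a preview of how these two pieces of the flags word are consumed, the binder_transaction code quoted further below splits them apart like this when it creates the binder_node for this entity:

node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;    /* the 0x7f part */ 
node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);    /* the accepts-fds bit */ 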

The binder passed in here is the MediaPlayerService instance created with new in MediaPlayerService::instantiate, so it is not NULL. Also, since MediaPlayerService inherits from BBinder, it is a local Binder entity, so binder->localBinder() returns a BBinder pointer that is certainly non-NULL, and the following statements execute:

obj.type = BINDER_TYPE_BINDER; 
obj.binder = local->getWeakRefs(); 
obj.cookie = local; 

These set the remaining members of the flat_binder_obj. Note that local, the pointer to this Binder entity, is saved in the cookie member of the flat_binder_obj.

The function then calls finish_flatten_binder to write this flat_binder_obj into the Parcel:

inline static status_t finish_flatten_binder( 
 const sp<IBinder>& binder, const flat_binder_object& flat, Parcel* out) 
{ 
 return out->writeObject(flat, false); 
} 

Parcel::writeObject is implemented as follows:

status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData) 
{ 
 const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity; 
 const bool enoughObjects = mObjectsSize < mObjectsCapacity; 
 if (enoughData && enoughObjects) { 
restart_write: 
  *reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val; 
   
  // Need to write meta-data? 
  if (nullMetaData || val.binder != NULL) { 
   mObjects[mObjectsSize] = mDataPos; 
   acquire_object(ProcessState::self(), val, this); 
   mObjectsSize++; 
  } 
   
  // remember if it's a file descriptor 
  if (val.type == BINDER_TYPE_FD) { 
   mHasFds = mFdsKnown = true; 
  } 
 
  return finishWrite(sizeof(flat_binder_object)); 
 } 
 
 if (!enoughData) { 
  const status_t err = growData(sizeof(val)); 
  if (err != NO_ERROR) return err; 
 } 
 if (!enoughObjects) { 
  size_t newSize = ((mObjectsSize+2)*3)/2; 
  size_t* objects = (size_t*)realloc(mObjects, newSize*sizeof(size_t)); 
  if (objects == NULL) return NO_MEMORY; 
  mObjects = objects; 
  mObjectsCapacity = newSize; 
 } 
  
 goto restart_write; 
} 

Besides writing the flat_binder_obj into the Parcel, this also records the offset at which the flat_binder_obj sits within the Parcel:

                    mObjects[mObjectsSize] = mDataPos;  

This is because, when data transferred between processes carries Binder objects, the Binder driver has to do further processing on them to keep the various Binder entities consistent. Below we will see how the driver handles these Binder objects.

Back in BpServiceManager::addService, the following statement is executed:

      status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);  

Looking back at the class diagram in 淺談Android系統進程間通信(IPC)機制Binder中的Server和Client獲得Service Manager接口之路, the remote member function here comes from the BpRefBase class and returns a BpBinder pointer, so let us continue into BpBinder::transact:

status_t BpBinder::transact( 
 uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) 
{ 
 // Once a binder has died, it will never come back to life. 
 if (mAlive) { 
  status_t status = IPCThreadState::self()->transact( 
   mHandle, code, data, reply, flags); 
  if (status == DEAD_OBJECT) mAlive = 0; 
  return status; 
 } 
 
 return DEAD_OBJECT; 
} 

This in turn calls IPCThreadState::transact to carry out the actual operation. Note that mHandle is 0 here and code is ADD_SERVICE_TRANSACTION. ADD_SERVICE_TRANSACTION was passed in as a parameter above, but why is mHandle 0? Because this object represents the Service Manager remote interface, whose handle value is always 0; see 淺談Android系統進程間通信(IPC)機制Binder中的Server和Client獲得Service Manager接口之路 for details.

Now step into IPCThreadState::transact to see what it does:

status_t IPCThreadState::transact(int32_t handle, 
         uint32_t code, const Parcel& data, 
         Parcel* reply, uint32_t flags) 
{ 
 status_t err = data.errorCheck(); 
 
 flags |= TF_ACCEPT_FDS; 
 
 IF_LOG_TRANSACTIONS() { 
  TextOutput::Bundle _b(alog); 
  alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand " 
   << handle << " / code " << TypeCode(code) << ": " 
   << indent << data << dedent << endl; 
 } 
  
 if (err == NO_ERROR) { 
  LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(), 
   (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY"); 
  err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL); 
 } 
  
 if (err != NO_ERROR) { 
  if (reply) reply->setError(err); 
  return (mLastError = err); 
 } 
  
 if ((flags & TF_ONE_WAY) == 0) { 
  #if 0 
  if (code == 4) { // relayout 
   LOGI(">>>>>> CALLING transaction 4"); 
  } else { 
   LOGI(">>>>>> CALLING transaction %d", code); 
  } 
  #endif 
  if (reply) { 
   err = waitForResponse(reply); 
  } else { 
   Parcel fakeReply; 
   err = waitForResponse(&fakeReply); 
  } 
  #if 0 
  if (code == 4) { // relayout 
   LOGI("<<<<<< RETURNING transaction 4"); 
  } else { 
   LOGI("<<<<<< RETURNING transaction %d", code); 
  } 
  #endif 
   
  IF_LOG_TRANSACTIONS() { 
   TextOutput::Bundle _b(alog); 
   alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand " 
    << handle << ": "; 
   if (reply) alog << indent << *reply << dedent << endl; 
   else alog << "(none requested)" << endl; 
  } 
 } else { 
  err = waitForResponse(NULL, NULL); 
 } 
  
 return err; 
} 

The flags parameter of IPCThreadState::transact has a default value of 0, and no corresponding argument was passed above, so here it is 0.

The function first calls writeTransactionData to prepare a struct binder_transaction_data, which will shortly be passed to the Binder driver. struct binder_transaction_data was described in detail in 淺談Service Manager成為Android進程間通信(IPC)機制Binder守護進程之路, which readers may wish to revisit; for convenience, its definition is listed here again:

struct binder_transaction_data { 
 /* The first two are only used for bcTRANSACTION and brTRANSACTION, 
  * identifying the target and contents of the transaction. 
  */ 
 union { 
  size_t handle; /* target descriptor of command transaction */ 
  void *ptr; /* target descriptor of return transaction */ 
 } target; 
 void  *cookie; /* target object cookie */ 
 unsigned int code;  /* transaction command */ 
 
 /* General information about the transaction. */ 
 unsigned int flags; 
 pid_t  sender_pid; 
 uid_t  sender_euid; 
 size_t  data_size; /* number of bytes of data */ 
 size_t  offsets_size; /* number of bytes of offsets */ 
 
 /* If this transaction is inline, the data immediately 
  * follows here; otherwise, it ends with a pointer to 
  * the data buffer. 
  */ 
 union { 
  struct { 
   /* transaction data */ 
   const void *buffer; 
   /* offsets from buffer to flat_binder_object structs */ 
   const void *offsets; 
  } ptr; 
  uint8_t buf[8]; 
 } data; 
}; 
  

writeTransactionData is implemented as follows:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags, 
 int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer) 
{ 
 binder_transaction_data tr; 
 
 tr.target.handle = handle; 
 tr.code = code; 
 tr.flags = binderFlags; 
  
 const status_t err = data.errorCheck(); 
 if (err == NO_ERROR) { 
  tr.data_size = data.ipcDataSize(); 
  tr.data.ptr.buffer = data.ipcData(); 
  tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t); 
  tr.data.ptr.offsets = data.ipcObjects(); 
 } else if (statusBuffer) { 
  tr.flags |= TF_STATUS_CODE; 
  *statusBuffer = err; 
  tr.data_size = sizeof(status_t); 
  tr.data.ptr.buffer = statusBuffer; 
  tr.offsets_size = 0; 
  tr.data.ptr.offsets = NULL; 
 } else { 
  return (mLastError = err); 
 } 
  
 mOut.writeInt32(cmd); 
 mOut.write(&tr, sizeof(tr)); 
  
 return NO_ERROR; 
} 


Note that cmd here is BC_TRANSACTION. The function is simple; in this scenario it just executes the following statements to initialize the local variable tr:

tr.data_size = data.ipcDataSize(); 
tr.data.ptr.buffer = data.ipcData(); 
tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t); 
tr.data.ptr.offsets = data.ipcObjects(); 

Recalling the earlier discussion, the content written into tr.data.ptr.buffer is equivalent to the following:

writeInt32(IPCThreadState::self()->getStrictModePolicy() | 
    STRICT_MODE_PENALTY_GATHER); 
writeString16("android.os.IServiceManager"); 
writeString16("media.player"); 
writeStrongBinder(new MediaPlayerService()); 

This data contains one Binder entity, the MediaPlayerService, so tr.offsets_size is set to cover exactly one offset entry (1 * sizeof(size_t)), and tr.data.ptr.offsets points to that offset, which records where the flattened MediaPlayerService object lies within tr.data.ptr.buffer. Finally, the contents of tr are saved in the IPCThreadState member variable mOut, as sketched below.
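To make this concrete, here is a rough picture (not literal code) of what mOut and the transaction now contain in this addService scenario:

// mOut:  [ BC_TRANSACTION ][ struct binder_transaction_data tr ] 
// 
// tr.data.ptr.buffer  --> [ int32: strict mode policy              ] 
//                         [ string16: "android.os.IServiceManager" ] 
//                         [ string16: "media.player"               ] 
//                         [ flat_binder_object (MediaPlayerService)]  <--+ 
// tr.data.ptr.offsets --> [ offset of that flat_binder_object      ] ----+ 
// tr.offsets_size = 1 * sizeof(size_t) 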

Back in IPCThreadState::transact, (flags & TF_ONE_WAY) == 0 is true and reply is not NULL, so execution ultimately takes the waitForResponse(reply) path. Let us look at the implementation of waitForResponse:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult) 
{ 
 int32_t cmd; 
 int32_t err; 
 
 while (1) { 
  if ((err=talkWithDriver()) < NO_ERROR) break; 
  err = mIn.errorCheck(); 
  if (err < NO_ERROR) break; 
  if (mIn.dataAvail() == 0) continue; 
   
  cmd = mIn.readInt32(); 
   
  IF_LOG_COMMANDS() { 
   alog << "Processing waitForResponse Command: " 
    << getReturnString(cmd) << endl; 
  } 
 
  switch (cmd) { 
  case BR_TRANSACTION_COMPLETE: 
   if (!reply && !acquireResult) goto finish; 
   break; 
   
  case BR_DEAD_REPLY: 
   err = DEAD_OBJECT; 
   goto finish; 
 
  case BR_FAILED_REPLY: 
   err = FAILED_TRANSACTION; 
   goto finish; 
   
  case BR_ACQUIRE_RESULT: 
   { 
    LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT"); 
    const int32_t result = mIn.readInt32(); 
    if (!acquireResult) continue; 
    *acquireResult = result ? NO_ERROR : INVALID_OPERATION; 
   } 
   goto finish; 
   
  case BR_REPLY: 
   { 
    binder_transaction_data tr; 
    err = mIn.read(&tr, sizeof(tr)); 
    LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY"); 
    if (err != NO_ERROR) goto finish; 
 
    if (reply) { 
     if ((tr.flags & TF_STATUS_CODE) == 0) { 
      reply->ipcSetDataReference( 
       reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), 
       tr.data_size, 
       reinterpret_cast<const size_t*>(tr.data.ptr.offsets), 
       tr.offsets_size/sizeof(size_t), 
       freeBuffer, this); 
     } else { 
      err = *static_cast<const status_t*>(tr.data.ptr.buffer); 
      freeBuffer(NULL, 
       reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), 
       tr.data_size, 
       reinterpret_cast<const size_t*>(tr.data.ptr.offsets), 
       tr.offsets_size/sizeof(size_t), this); 
     } 
    } else { 
     freeBuffer(NULL, 
      reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), 
      tr.data_size, 
      reinterpret_cast<const size_t*>(tr.data.ptr.offsets), 
      tr.offsets_size/sizeof(size_t), this); 
     continue; 
    } 
   } 
   goto finish; 
 
  default: 
   err = executeCommand(cmd); 
   if (err != NO_ERROR) goto finish; 
   break; 
  } 
 } 
 
finish: 
 if (err != NO_ERROR) { 
  if (acquireResult) *acquireResult = err; 
  if (reply) reply->setError(err); 
  mLastError = err; 
 } 
  
 return err; 
} 

Although this function is long, its main job is to call talkWithDriver to interact with the Binder driver:

status_t IPCThreadState::talkWithDriver(bool doReceive) 
{ 
 LOG_ASSERT(mProcess->mDriverFD >= 0, "Binder driver is not opened"); 
  
 binder_write_read bwr; 
  
 // Is the read buffer empty? 
 const bool needRead = mIn.dataPosition() >= mIn.dataSize(); 
  
 // We don't want to write anything if we are still reading 
 // from data left in the input buffer and the caller 
 // has requested to read the next data. 
 const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0; 
  
 bwr.write_size = outAvail; 
 bwr.write_buffer = (long unsigned int)mOut.data(); 
 
 // This is what we'll read. 
 if (doReceive && needRead) { 
  bwr.read_size = mIn.dataCapacity(); 
  bwr.read_buffer = (long unsigned int)mIn.data(); 
 } else { 
  bwr.read_size = 0; 
 } 
  
 IF_LOG_COMMANDS() { 
  TextOutput::Bundle _b(alog); 
  if (outAvail != 0) { 
   alog << "Sending commands to driver: " << indent; 
   const void* cmds = (const void*)bwr.write_buffer; 
   const void* end = ((const uint8_t*)cmds)+bwr.write_size; 
   alog << HexDump(cmds, bwr.write_size) << endl; 
   while (cmds < end) cmds = printCommand(alog, cmds); 
   alog << dedent; 
  } 
  alog << "Size of receive buffer: " << bwr.read_size 
   << ", needRead: " << needRead << ", doReceive: " << doReceive << endl; 
 } 
  
 // Return immediately if there is nothing to do. 
 if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR; 
  
 bwr.write_consumed = 0; 
 bwr.read_consumed = 0; 
 status_t err; 
 do { 
  IF_LOG_COMMANDS() { 
   alog << "About to read/write, write size = " << mOut.dataSize() << endl; 
  } 
#if defined(HAVE_ANDROID_OS) 
  if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) 
   err = NO_ERROR; 
  else 
   err = -errno; 
#else 
  err = INVALID_OPERATION; 
#endif 
  IF_LOG_COMMANDS() { 
   alog << "Finished read/write, write size = " << mOut.dataSize() << endl; 
  } 
 } while (err == -EINTR); 
  
 IF_LOG_COMMANDS() { 
  alog << "Our err: " << (void*)err << ", write consumed: " 
   << bwr.write_consumed << " (of " << mOut.dataSize() 
   << "), read consumed: " << bwr.read_consumed << endl; 
 } 
 
 if (err >= NO_ERROR) { 
  if (bwr.write_consumed > 0) { 
   if (bwr.write_consumed < (ssize_t)mOut.dataSize()) 
    mOut.remove(0, bwr.write_consumed); 
   else 
    mOut.setDataSize(0); 
  } 
  if (bwr.read_consumed > 0) { 
   mIn.setDataSize(bwr.read_consumed); 
   mIn.setDataPosition(0); 
  } 
  IF_LOG_COMMANDS() { 
   TextOutput::Bundle _b(alog); 
   alog << "Remaining data size: " << mOut.dataSize() << endl; 
   alog << "Received commands from driver: " << indent; 
   const void* cmds = mIn.data(); 
   const void* end = mIn.data() + mIn.dataSize(); 
   alog << HexDump(cmds, mIn.dataSize()) << endl; 
   while (cmds < end) cmds = printReturnCommand(alog, cmds); 
   alog << dedent; 
  } 
  return NO_ERROR; 
 } 
  
 return err; 
} 

Here doReceive and needRead are both true; interested readers can verify this for themselves. So we tell the Binder driver to perform the write operation first and then the read operation, as we will see below.

Finally, ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) takes us into the binder_ioctl function of the Binder driver; we only look at the logic for cmd == BINDER_WRITE_READ:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 
{ 
 int ret; 
 struct binder_proc *proc = filp->private_data; 
 struct binder_thread *thread; 
 unsigned int size = _IOC_SIZE(cmd); 
 void __user *ubuf = (void __user *)arg; 
 
 /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/ 
 
 ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); 
 if (ret) 
  return ret; 
 
 mutex_lock(&binder_lock); 
 thread = binder_get_thread(proc); 
 if (thread == NULL) { 
  ret = -ENOMEM; 
  goto err; 
 } 
 
 switch (cmd) { 
 case BINDER_WRITE_READ: { 
  struct binder_write_read bwr; 
  if (size != sizeof(struct binder_write_read)) { 
   ret = -EINVAL; 
   goto err; 
  } 
  if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { 
   ret = -EFAULT; 
   goto err; 
  } 
  if (binder_debug_mask & BINDER_DEBUG_READ_WRITE) 
   printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n", 
   proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer); 
  if (bwr.write_size > 0) { 
   ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed); 
   if (ret < 0) { 
    bwr.read_consumed = 0; 
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) 
     ret = -EFAULT; 
    goto err; 
   } 
  } 
  if (bwr.read_size > 0) { 
   ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK); 
   if (!list_empty(&proc->todo)) 
    wake_up_interruptible(&proc->wait); 
   if (ret < 0) { 
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) 
     ret = -EFAULT; 
    goto err; 
   } 
  } 
  if (binder_debug_mask & BINDER_DEBUG_READ_WRITE) 
   printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n", 
   proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size); 
  if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { 
   ret = -EFAULT; 
   goto err; 
  } 
  break; 
 } 
 ...... 
 } 
 ret = 0; 
err: 
 ...... 
 return ret; 
} 

The function first copies the user-supplied argument into the local variable struct binder_write_read bwr. Here bwr.write_size > 0 is true, so we enter binder_thread_write; we only look at the BC_TRANSACTION logic:

binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, 
     void __user *buffer, int size, signed long *consumed) 
{ 
 uint32_t cmd; 
 void __user *ptr = buffer + *consumed; 
 void __user *end = buffer + size; 
 
 while (ptr < end && thread->return_error == BR_OK) { 
  if (get_user(cmd, (uint32_t __user *)ptr)) 
   return -EFAULT; 
  ptr += sizeof(uint32_t); 
  if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) { 
   binder_stats.bc[_IOC_NR(cmd)]++; 
   proc->stats.bc[_IOC_NR(cmd)]++; 
   thread->stats.bc[_IOC_NR(cmd)]++; 
  } 
  switch (cmd) { 
   ..... 
  case BC_TRANSACTION: 
  case BC_REPLY: { 
   struct binder_transaction_data tr; 
 
   if (copy_from_user(&tr, ptr, sizeof(tr))) 
    return -EFAULT; 
   ptr += sizeof(tr); 
   binder_transaction(proc, thread, &tr, cmd == BC_REPLY); 
   break; 
  } 
  ...... 
  } 
  *consumed = ptr - buffer; 
 } 
 return 0; 
} 

It first copies the transaction data passed from user space into the local variable struct binder_transaction_data tr, and then calls binder_transaction for further processing; irrelevant code is omitted here:

static void 
binder_transaction(struct binder_proc *proc, struct binder_thread *thread, 
struct binder_transaction_data *tr, int reply) 
{ 
 struct binder_transaction *t; 
 struct binder_work *tcomplete; 
 size_t *offp, *off_end; 
 struct binder_proc *target_proc; 
 struct binder_thread *target_thread = NULL; 
 struct binder_node *target_node = NULL; 
 struct list_head *target_list; 
 wait_queue_head_t *target_wait; 
 struct binder_transaction *in_reply_to = NULL; 
 struct binder_transaction_log_entry *e; 
 uint32_t return_error; 
 
  ...... 
 
 if (reply) { 
   ...... 
 } else { 
  if (tr->target.handle) { 
   ...... 
  } else { 
   target_node = binder_context_mgr_node; 
   if (target_node == NULL) { 
    return_error = BR_DEAD_REPLY; 
    goto err_no_context_mgr_node; 
   } 
  } 
  ...... 
  target_proc = target_node->proc; 
  if (target_proc == NULL) { 
   return_error = BR_DEAD_REPLY; 
   goto err_dead_binder; 
  } 
  ...... 
 } 
 if (target_thread) { 
  ...... 
 } else { 
  target_list = &target_proc->todo; 
  target_wait = &target_proc->wait; 
 } 
  
 ...... 
 
 /* TODO: reuse incoming transaction for reply */ 
 t = kzalloc(sizeof(*t), GFP_KERNEL); 
 if (t == NULL) { 
  return_error = BR_FAILED_REPLY; 
  goto err_alloc_t_failed; 
 } 
 ...... 
 
 tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL); 
 if (tcomplete == NULL) { 
  return_error = BR_FAILED_REPLY; 
  goto err_alloc_tcomplete_failed; 
 } 
  
 ...... 
 
 if (!reply && !(tr->flags & TF_ONE_WAY)) 
  t->from = thread; 
 else 
  t->from = NULL; 
 t->sender_euid = proc->tsk->cred->euid; 
 t->to_proc = target_proc; 
 t->to_thread = target_thread; 
 t->code = tr->code; 
 t->flags = tr->flags; 
 t->priority = task_nice(current); 
 t->buffer = binder_alloc_buf(target_proc, tr->data_size, 
  tr->offsets_size, !reply && (t->flags & TF_ONE_WAY)); 
 if (t->buffer == NULL) { 
  return_error = BR_FAILED_REPLY; 
  goto err_binder_alloc_buf_failed; 
 } 
 t->buffer->allow_user_free = 0; 
 t->buffer->debug_id = t->debug_id; 
 t->buffer->transaction = t; 
 t->buffer->target_node = target_node; 
 if (target_node) 
  binder_inc_node(target_node, 1, 0, NULL); 
 
 offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *))); 
 
 if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) { 
  ...... 
  return_error = BR_FAILED_REPLY; 
  goto err_copy_data_failed; 
 } 
 if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) { 
  ...... 
  return_error = BR_FAILED_REPLY; 
  goto err_copy_data_failed; 
 } 
 ...... 
 
 off_end = (void *)offp + tr->offsets_size; 
 for (; offp < off_end; offp++) { 
  struct flat_binder_object *fp; 
  ...... 
  fp = (struct flat_binder_object *)(t->buffer->data + *offp); 
  switch (fp->type) { 
  case BINDER_TYPE_BINDER: 
  case BINDER_TYPE_WEAK_BINDER: { 
   struct binder_ref *ref; 
   struct binder_node *node = binder_get_node(proc, fp->binder); 
   if (node == NULL) { 
    node = binder_new_node(proc, fp->binder, fp->cookie); 
    if (node == NULL) { 
     return_error = BR_FAILED_REPLY; 
     goto err_binder_new_node_failed; 
    } 
    node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK; 
    node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS); 
   } 
   if (fp->cookie != node->cookie) { 
    ...... 
    goto err_binder_get_ref_for_node_failed; 
   } 
   ref = binder_get_ref_for_node(target_proc, node); 
   if (ref == NULL) { 
    return_error = BR_FAILED_REPLY; 
    goto err_binder_get_ref_for_node_failed; 
   } 
   if (fp->type == BINDER_TYPE_BINDER) 
    fp->type = BINDER_TYPE_HANDLE; 
   else 
    fp->type = BINDER_TYPE_WEAK_HANDLE; 
   fp->handle = ref->desc; 
   binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo); 
   ...... 
        
  } break; 
  ...... 
  } 
 } 
 
 if (reply) { 
  ...... 
 } else if (!(t->flags & TF_ONE_WAY)) { 
  BUG_ON(t->buffer->async_transaction != 0); 
  t->need_reply = 1; 
  t->from_parent = thread->transaction_stack; 
  thread->transaction_stack = t; 
 } else { 
  ...... 
 } 
 t->work.type = BINDER_WORK_TRANSACTION; 
 list_add_tail(&t->work.entry, target_list); 
 tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; 
 list_add_tail(&tcomplete->entry, &thread->todo); 
 if (target_wait) 
  wake_up_interruptible(target_wait); 
 return; 
 ...... 
} 

Note that the reply argument passed in here is 0, and tr->target.handle is also 0. Therefore target_thread remains NULL, while target_proc, target_node, target_list and target_wait take the following values:

target_node = binder_context_mgr_node; 
target_proc = target_node->proc; 
target_list = &target_proc->todo; 
target_wait = &target_proc->wait; 

Next, a pending transaction t and a pending work item tcomplete are allocated and initialized:

/* TODO: reuse incoming transaction for reply */ 
t = kzalloc(sizeof(*t), GFP_KERNEL); 
if (t == NULL) { 
 return_error = BR_FAILED_REPLY; 
 goto err_alloc_t_failed; 
} 
...... 
 
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL); 
if (tcomplete == NULL) { 
 return_error = BR_FAILED_REPLY; 
 goto err_alloc_tcomplete_failed; 
} 
 
...... 
 
if (!reply && !(tr->flags & TF_ONE_WAY)) 
 t->from = thread; 
else 
 t->from = NULL; 
t->sender_euid = proc->tsk->cred->euid; 
t->to_proc = target_proc; 
t->to_thread = target_thread; 
t->code = tr->code; 
t->flags = tr->flags; 
t->priority = task_nice(current); 
t->buffer = binder_alloc_buf(target_proc, tr->data_size, 
 tr->offsets_size, !reply && (t->flags & TF_ONE_WAY)); 
if (t->buffer == NULL) { 
 return_error = BR_FAILED_REPLY; 
 goto err_binder_alloc_buf_failed; 
} 
t->buffer->allow_user_free = 0; 
t->buffer->debug_id = t->debug_id; 
t->buffer->transaction = t; 
t->buffer->target_node = target_node; 
if (target_node) 
 binder_inc_node(target_node, 1, 0, NULL); 
 
offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *))); 
 
if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) { 
 ...... 
 return_error = BR_FAILED_REPLY; 
 goto err_copy_data_failed; 
} 
if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) { 
 ...... 
 return_error = BR_FAILED_REPLY; 
 goto err_copy_data_failed; 
} 

Note that transaction t is to be handled by target_proc, which in this scenario is the Service Manager. Therefore the following statement:

t->buffer = binder_alloc_buf(target_proc, tr->data_size, 
  tr->offsets_size, !reply && (t->flags & TF_ONE_WAY)); 

allocates a block of memory in the Service Manager's process space, into which the parameters passed in from user space are then copied:

if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) { 
 ...... 
 return_error = BR_FAILED_REPLY; 
 goto err_copy_data_failed; 
} 
if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) { 
 ...... 
 return_error = BR_FAILED_REPLY; 
 goto err_copy_data_failed; 
} 

Since target_node is now going to be used, its reference count is incremented:

if (target_node) 
  binder_inc_node(target_node, 1, 0, NULL); 

The for loop that follows processes the Binder objects carried in the transferred data. In our scenario there is one Binder entity of type BINDER_TYPE_BINDER, the MediaPlayerService:

 switch (fp->type) { 
 case BINDER_TYPE_BINDER: 
 case BINDER_TYPE_WEAK_BINDER: { 
struct binder_ref *ref; 
struct binder_node *node = binder_get_node(proc, fp->binder); 
if (node == NULL) { 
 node = binder_new_node(proc, fp->binder, fp->cookie); 
 if (node == NULL) { 
  return_error = BR_FAILED_REPLY; 
  goto err_binder_new_node_failed; 
 } 
 node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK; 
 node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS); 
} 
if (fp->cookie != node->cookie) { 
 ...... 
 goto err_binder_get_ref_for_node_failed; 
} 
ref = binder_get_ref_for_node(target_proc, node); 
if (ref == NULL) { 
 return_error = BR_FAILED_REPLY; 
 goto err_binder_get_ref_for_node_failed; 
} 
if (fp->type == BINDER_TYPE_BINDER) 
 fp->type = BINDER_TYPE_HANDLE; 
else 
 fp->type = BINDER_TYPE_WEAK_HANDLE; 
fp->handle = ref->desc; 
binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo); 
...... 
       
} break; 

Since this is the first time this MediaPlayerService is transferred through the Binder driver, binder_get_node returns NULL when looking up this Binder entity, so binder_new_node creates a new one in proc; the next time it can be used directly.

Now, because this Binder entity MediaPlayerService is being handed over to target_proc, i.e. the Service Manager, to manage, the Service Manager is going to reference it. So binder_get_ref_for_node creates a reference to MediaPlayerService for the target process, and binder_inc_ref increments that reference's count so it cannot be destroyed while still in use. Note that by this point the type of the flat_binder_obj in t->buffer has been changed to BINDER_TYPE_HANDLE and its handle to ref->desc, no longer what they originally were, because this flat_binder_obj is ultimately destined for the Service Manager, which can only refer to this Binder entity through a handle value.
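In other words, the flat_binder_object written by the sender and the one the Service Manager will eventually read differ roughly as follows (a conceptual summary of the rewrite just described):

// written by the MediaPlayerService process:    seen by the Service Manager: 
//   type   = BINDER_TYPE_BINDER                   type   = BINDER_TYPE_HANDLE 
//   binder = weak refs of the local BBinder       handle = ref->desc 
//   cookie = BBinder* (the local entity)          (binder/cookie no longer used here) 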

Finally, the pending transaction is appended to the target_list:

                 list_add_tail(&t->work.entry, target_list);  

and the pending work item is appended to the current thread's todo list:

                    list_add_tail(&tcomplete->entry, &thread->todo);  

The target process now has work to do, so it is woken up:

             if (target_wait)  
                          wake_up_interruptible(target_wait);   

This wakes up the Service Manager process. Recall from the earlier article 淺談Service Manager成為Android進程間通信(IPC)機制Binder守護進程之路 that at this moment the Service Manager is asleep in the binder_thread_read function, blocked in wait_event_interruptible_exclusive.

For now we set aside what happens after the Service Manager is woken up, continue with the MediaPlayerService startup process, and come back to it later.

Back in binder_ioctl, bwr.read_size > 0 is true, so we enter binder_thread_read:

static int 
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread, 
     void __user *buffer, int size, signed long *consumed, int non_block) 
{ 
 void __user *ptr = buffer + *consumed; 
 void __user *end = buffer + size; 
 
 int ret = 0; 
 int wait_for_proc_work; 
 
 if (*consumed == 0) { 
  if (put_user(BR_NOOP, (uint32_t __user *)ptr)) 
   return -EFAULT; 
  ptr += sizeof(uint32_t); 
 } 
 
retry: 
 wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo); 
  
 ....... 
 
 if (wait_for_proc_work) { 
  ....... 
 } else { 
  if (non_block) { 
   if (!binder_has_thread_work(thread)) 
    ret = -EAGAIN; 
  } else 
   ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread)); 
 } 
  
 ...... 
 
 while (1) { 
  uint32_t cmd; 
  struct binder_transaction_data tr; 
  struct binder_work *w; 
  struct binder_transaction *t = NULL; 
 
  if (!list_empty(&thread->todo)) 
   w = list_first_entry(&thread->todo, struct binder_work, entry); 
  else if (!list_empty(&proc->todo) && wait_for_proc_work) 
   w = list_first_entry(&proc->todo, struct binder_work, entry); 
  else { 
   if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */ 
    goto retry; 
   break; 
  } 
 
  if (end - ptr < sizeof(tr) + 4) 
   break; 
 
  switch (w->type) { 
  ...... 
  case BINDER_WORK_TRANSACTION_COMPLETE: { 
   cmd = BR_TRANSACTION_COMPLETE; 
   if (put_user(cmd, (uint32_t __user *)ptr)) 
    return -EFAULT; 
   ptr += sizeof(uint32_t); 
 
   binder_stat_br(proc, thread, cmd); 
   if (binder_debug_mask & BINDER_DEBUG_TRANSACTION_COMPLETE) 
    printk(KERN_INFO "binder: %d:%d BR_TRANSACTION_COMPLETE\n", 
    proc->pid, thread->pid); 
 
   list_del(&w->entry); 
   kfree(w); 
   binder_stats.obj_deleted[BINDER_STAT_TRANSACTION_COMPLETE]++; 
            } break; 
  ...... 
  } 
 
  if (!t) 
   continue; 
 
  ...... 
 } 
 
done: 
 ...... 
 return 0; 
} 

Here thread->transaction_stack and thread->todo are both non-empty, so wait_for_proc_work is false. Because thread->todo is non-empty, binder_has_thread_work returns true, so although the thread calls wait_event_interruptible it does not actually sleep, and execution continues.

Since thread->todo is non-empty, the following statements execute:

if (!list_empty(&thread->todo)) 
  w = list_first_entry(&thread->todo, struct binder_work, entry); 

w->type is BINDER_WORK_TRANSACTION_COMPLETE, which was set in the binder_transaction function above, so this branch runs:

 switch (w->type) { 
 ...... 
 case BINDER_WORK_TRANSACTION_COMPLETE: { 
cmd = BR_TRANSACTION_COMPLETE; 
if (put_user(cmd, (uint32_t __user *)ptr)) 
 return -EFAULT; 
ptr += sizeof(uint32_t); 
 
  ...... 
list_del(&w->entry); 
kfree(w); 
   
} break; 
...... 
 } 

This removes w from thread->todo. Since t is NULL here, the while loop runs once more, and as there is nothing left to do, we finally return to binder_ioctl. Note that in total two integers have been written into the user-supplied buffer: BR_NOOP and BR_TRANSACTION_COMPLETE.
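So when this ioctl returns to user space, the buffers handed in by talkWithDriver look roughly like this (a conceptual picture of the state just described):

// mIn:  [ BR_NOOP ][ BR_TRANSACTION_COMPLETE ]    bwr.read_consumed  = 2 * sizeof(uint32_t) 
// mOut: fully consumed (the BC_TRANSACTION)       bwr.write_consumed = mOut.dataSize() 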

Before binder_ioctl returns to user space, it copies the data-consumption counters back to user space:

if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { 
 ret = -EFAULT; 
 goto err; 
} 

Finally we return to IPCThreadState::talkWithDriver and execute the following statements:

 if (err >= NO_ERROR) { 
  if (bwr.write_consumed > 0) { 
   if (bwr.write_consumed < (ssize_t)mOut.dataSize()) 
    mOut.remove(0, bwr.write_consumed); 
   else 
    mOut.setDataSize(0); 
  } 
  if (bwr.read_consumed > 0) { 
   mIn.setDataSize(bwr.read_consumed); 
   mIn.setDataPosition(0); 
  } 
  ...... 
  return NO_ERROR; 
 } 

First the mOut data is cleared:
                          mOut.setDataSize(0);  

Then the size of the data that has been read is recorded:

                          mIn.setDataSize(bwr.read_consumed);  
                          mIn.setDataPosition(0);  

Then we return to IPCThreadState::waitForResponse. There, an integer is first read from mIn; this is BR_NOOP, a no-op that does nothing. Execution then enters IPCThreadState::talkWithDriver again.

This time, after the following statement executes:

                       const bool needRead = mIn.dataPosition() >= mIn.dataSize();  

needRead is false, because mIn still holds one unread integer, BR_TRANSACTION_COMPLETE.

And after this statement executes:

                       const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;  

outAvail is 0. As a result, bwr.write_size and bwr.read_size are both 0, so IPCThreadState::talkWithDriver does nothing and returns directly to IPCThreadState::waitForResponse, which then reads another integer from mIn; this one is BR_TRANSACTION_COMPLETE:

switch (cmd) { 
case BR_TRANSACTION_COMPLETE: 
  if (!reply && !acquireResult) goto finish; 
  break; 
...... 
} 

reply is not NULL, so the loop in IPCThreadState::waitForResponse is not finished; it continues and enters IPCThreadState::talkWithDriver once more.

This time needRead is true while outAvail is still 0, so bwr.read_size is non-zero and bwr.write_size is 0. The call:

                       ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)  

then enters the binder_ioctl function of the Binder driver. Since bwr.write_size is 0 and bwr.read_size is non-zero, this time we go straight into binder_thread_read. Now thread->transaction_stack is non-NULL but thread->todo is empty, so the thread executes:
wait_event_interruptible(thread->wait, binder_has_thread_work(thread));  

and goes to sleep, waiting for the Service Manager to wake it up.

Now we can return to the point where the Service Manager is woken up, picking up where the earlier article 淺談Service Manager成為Android進程間通信(IPC)機制Binder守護進程之路 left off. At that point the Service Manager was asleep in binder_thread_read, blocked in wait_event_interruptible_exclusive. Having been woken up by the MediaPlayerService startup sequence above, it continues executing binder_thread_read:

static int 
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread, 
     void __user *buffer, int size, signed long *consumed, int non_block) 
{ 
 void __user *ptr = buffer + *consumed; 
 void __user *end = buffer + size; 
 
 int ret = 0; 
 int wait_for_proc_work; 
 
 if (*consumed == 0) { 
  if (put_user(BR_NOOP, (uint32_t __user *)ptr)) 
   return -EFAULT; 
  ptr += sizeof(uint32_t); 
 } 
 
retry: 
 wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo); 
 
 ...... 
 
 if (wait_for_proc_work) { 
  ...... 
  if (non_block) { 
   if (!binder_has_proc_work(proc, thread)) 
    ret = -EAGAIN; 
  } else 
   ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread)); 
 } else { 
  ...... 
 } 
  
 ...... 
 
 while (1) { 
  uint32_t cmd; 
  struct binder_transaction_data tr; 
  struct binder_work *w; 
  struct binder_transaction *t = NULL; 
 
  if (!list_empty(&thread->todo)) 
   w = list_first_entry(&thread->todo, struct binder_work, entry); 
  else if (!list_empty(&proc->todo) && wait_for_proc_work) 
   w = list_first_entry(&proc->todo, struct binder_work, entry); 
  else { 
   if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */ 
    goto retry; 
   break; 
  } 
 
  if (end - ptr < sizeof(tr) + 4) 
   break; 
 
  switch (w->type) { 
  case BINDER_WORK_TRANSACTION: { 
   t = container_of(w, struct binder_transaction, work); 
          } break; 
  ...... 
  } 
 
  if (!t) 
   continue; 
 
  BUG_ON(t->buffer == NULL); 
  if (t->buffer->target_node) { 
   struct binder_node *target_node = t->buffer->target_node; 
   tr.target.ptr = target_node->ptr; 
   tr.cookie = target_node->cookie; 
   ...... 
   cmd = BR_TRANSACTION; 
  } else { 
   ...... 
  } 
  tr.code = t->code; 
  tr.flags = t->flags; 
  tr.sender_euid = t->sender_euid; 
 
  if (t->from) { 
   struct task_struct *sender = t->from->proc->tsk; 
   tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns); 
  } else { 
   tr.sender_pid = 0; 
  } 
 
  tr.data_size = t->buffer->data_size; 
  tr.offsets_size = t->buffer->offsets_size; 
  tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset; 
  tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *)); 
 
  if (put_user(cmd, (uint32_t __user *)ptr)) 
   return -EFAULT; 
  ptr += sizeof(uint32_t); 
  if (copy_to_user(ptr, &tr, sizeof(tr))) 
   return -EFAULT; 
  ptr += sizeof(tr); 
 
  ...... 
 
  list_del(&t->work.entry); 
  t->buffer->allow_user_free = 1; 
  if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) { 
   t->to_parent = thread->transaction_stack; 
   t->to_thread = thread; 
   thread->transaction_stack = t; 
  } else { 
   t->buffer->transaction = NULL; 
   kfree(t); 
   binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++; 
  } 
  break; 
 } 
 
done: 
 
 ...... 
 return 0; 
} 

Once woken up, the Service Manager enters the while loop and starts processing work. Here wait_for_proc_work equals 1 and proc->todo is non-empty, so the first work item is taken from the proc->todo list:

                   w = list_first_entry(&proc->todo, struct binder_work, entry);  

From the description above we know that this work item has type BINDER_WORK_TRANSACTION, so the transaction item is obtained with:

                    t = container_of(w, struct binder_transaction, work);  

Next, the data in transaction t is copied into the local variable struct binder_transaction_data tr:

if (t->buffer->target_node) { 
 struct binder_node *target_node = t->buffer->target_node; 
 tr.target.ptr = target_node->ptr; 
 tr.cookie = target_node->cookie; 
 ...... 
 cmd = BR_TRANSACTION; 
} else { 
 ...... 
} 
tr.code = t->code; 
tr.flags = t->flags; 
tr.sender_euid = t->sender_euid; 
 
if (t->from) { 
 struct task_struct *sender = t->from->proc->tsk; 
 tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns); 
} else { 
 tr.sender_pid = 0; 
} 
 
tr.data_size = t->buffer->data_size; 
tr.offsets_size = t->buffer->offsets_size; 
tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset; 
tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *)); 

Here comes a very important point, the very essence of the Binder inter-process communication mechanism:

tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset; 
tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *)); 


The address in t->buffer->data is a kernel-space address, but the data must now be returned to the Service Manager process's user space, and user space cannot access kernel-space data directly, so something has to be done here. What exactly? When learning object-oriented languages we distinguish deep copies from shallow copies: a deep copy allocates a new block of memory and moves the original object's contents into it, whereas a shallow copy allocates no new storage but merely creates a reference that points at the original object. The Binder mechanism uses something akin to a shallow copy: a user-space virtual address is arranged so that it and the kernel-space virtual address t->buffer->data refer to the same physical memory. How can a user-space and a kernel-space virtual address point at the same physical address? See the earlier article 淺談Service Manager成為Android進程間通信(IPC)機制Binder守護進程之路 for the details. Here it suffices to add the offset proc->user_buffer_offset to t->buffer->data to obtain the user-space virtual address corresponding to t->buffer->data. After adjusting tr.data.ptr.buffer, do not forget to adjust tr.data.ptr.offsets in the same way.
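For reference, the offset used in this shallow-copy trick is established when the receiving process mmap'ed /dev/binder. A minimal sketch of the relationship follows, with the binder_mmap line reproduced from memory and therefore to be treated as an assumption rather than a verbatim quote:

/* in binder_mmap (assumption): */ 
proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer; 
 
/* hence, for any address inside the shared transaction buffer: */ 
/*   user-space VA = kernel-space VA + proc->user_buffer_offset   (same physical pages, no copy) */ 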
 

Next, the contents of tr are copied into the buffer supplied by the caller, with the pointer ptr pointing into that user buffer:

if (put_user(cmd, (uint32_t __user *)ptr)) 
 return -EFAULT; 
ptr += sizeof(uint32_t); 
if (copy_to_user(ptr, &tr, sizeof(tr))) 
 return -EFAULT; 
ptr += sizeof(tr); 

As we can see, only a shallow copy has been made of the contents referred to by tr.data.ptr.buffer and tr.data.ptr.offsets.

Finally, since this transaction has now been handled, it is removed from the todo list:

list_del(&t->work.entry); 
t->buffer->allow_user_free = 1; 
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) { 
 t->to_parent = thread->transaction_stack; 
 t->to_thread = thread; 
 thread->transaction_stack = t; 
} else { 
 t->buffer->transaction = NULL; 
 kfree(t); 
 binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++; 
} 

Note that cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY) is true here, which means that although the driver has finished its part of this transaction, it still has to wait for the Service Manager to finish and send back a confirmation; in other words a reply is expected, so the current transaction t is pushed onto the head of the thread->transaction_stack list:

t->to_parent = thread->transaction_stack; 
t->to_thread = thread; 
thread->transaction_stack = t; 

If cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY) were false, no reply would be needed and transaction t would simply be deleted.

The while loop then exits via a break, and we return to binder_ioctl:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 
{ 
 int ret; 
 struct binder_proc *proc = filp->private_data; 
 struct binder_thread *thread; 
 unsigned int size = _IOC_SIZE(cmd); 
 void __user *ubuf = (void __user *)arg; 
 
 ...... 
 
 switch (cmd) { 
 case BINDER_WRITE_READ: { 
  struct binder_write_read bwr; 
  if (size != sizeof(struct binder_write_read)) { 
   ret = -EINVAL; 
   goto err; 
  } 
  if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { 
   ret = -EFAULT; 
   goto err; 
  } 
  ...... 
  if (bwr.read_size > 0) { 
   ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK); 
   if (!list_empty(&proc->todo)) 
    wake_up_interruptible(&proc->wait); 
   if (ret < 0) { 
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) 
     ret = -EFAULT; 
    goto err; 
   } 
  } 
  ...... 
  if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { 
   ret = -EFAULT; 
   goto err; 
  } 
  break; 
  } 
 ...... 
 default: 
  ret = -EINVAL; 
  goto err; 
 } 
 ret = 0; 
err: 
 ...... 
 return ret; 
} 

After binder_thread_read returns, the driver checks whether proc->todo still has pending work; if so, it wakes up the threads sleeping on the proc->wait queue to handle it. Finally, the local struct binder_write_read bwr is copied back into the caller's buffer, and the function returns.

We are now back in the binder_loop function in frameworks/base/cmds/servicemanager/binder.c:

void binder_loop(struct binder_state *bs, binder_handler func) 
{ 
 int res; 
 struct binder_write_read bwr; 
 unsigned readbuf[32]; 
 
 bwr.write_size = 0; 
 bwr.write_consumed = 0; 
 bwr.write_buffer = 0; 
  
 readbuf[0] = BC_ENTER_LOOPER; 
 binder_write(bs, readbuf, sizeof(unsigned)); 
 
 for (;;) { 
  bwr.read_size = sizeof(readbuf); 
  bwr.read_consumed = 0; 
  bwr.read_buffer = (unsigned) readbuf; 
 
  res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); 
 
  if (res < 0) { 
   LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno)); 
   break; 
  } 
 
  res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func); 
  if (res == 0) { 
   LOGE("binder_loop: unexpected reply?!\n"); 
   break; 
  } 
  if (res < 0) { 
   LOGE("binder_loop: io error %d %s\n", res, strerror(errno)); 
   break; 
  } 
 } 
} 

The returned data is placed in readbuf, and binder_parse is then called to parse it:

int binder_parse(struct binder_state *bs, struct binder_io *bio, 
     uint32_t *ptr, uint32_t size, binder_handler func) 
{ 
 int r = 1; 
 uint32_t *end = ptr + (size / 4); 
 
 while (ptr < end) { 
  uint32_t cmd = *ptr++; 
  ...... 
  case BR_TRANSACTION: { 
   struct binder_txn *txn = (void *) ptr; 
   if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) { 
    LOGE("parse: txn too small!\n"); 
    return -1; 
   } 
   binder_dump_txn(txn); 
   if (func) { 
    unsigned rdata[256/4]; 
    struct binder_io msg; 
    struct binder_io reply; 
    int res; 
 
    bio_init(&reply, rdata, sizeof(rdata), 4); 
    bio_init_from_txn(&msg, txn); 
    res = func(bs, txn, &msg, &reply); 
    binder_send_reply(bs, &reply, txn->data, res); 
   } 
   ptr += sizeof(*txn) / sizeof(uint32_t); 
   break; 
        } 
  ...... 
  default: 
   LOGE("parse: OOPS %d\n", cmd); 
   return -1; 
  } 
 } 
 
 return r; 
} 

It first treats the data read from the Binder driver as a struct binder_txn, stored in the local variable txn. struct binder_txn is defined in frameworks/base/cmds/servicemanager/binder.h:

struct binder_txn 
{ 
 void *target; 
 void *cookie; 
 uint32_t code; 
 uint32_t flags; 
 
 uint32_t sender_pid; 
 uint32_t sender_euid; 
 
 uint32_t data_size; 
 uint32_t offs_size; 
 void *data; 
 void *offs; 
}; 

The function also uses another data structure, struct binder_io, likewise defined in frameworks/base/cmds/servicemanager/binder.h:

struct binder_io 
{ 
 char *data;   /* pointer to read/write from */ 
 uint32_t *offs;  /* array of offsets */ 
 uint32_t data_avail; /* bytes available in data buffer */ 
 uint32_t offs_avail; /* entries available in offsets array */ 
 
 char *data0;   /* start of data buffer */ 
 uint32_t *offs0;  /* start of offsets buffer */ 
 uint32_t flags; 
 uint32_t unused; 
}; 

Continuing, the function calls bio_init to initialize the reply variable:

void bio_init(struct binder_io *bio, void *data, 
    uint32_t maxdata, uint32_t maxoffs) 
{ 
 uint32_t n = maxoffs * sizeof(uint32_t); 
 
 if (n > maxdata) { 
  bio->flags = BIO_F_OVERFLOW; 
  bio->data_avail = 0; 
  bio->offs_avail = 0; 
  return; 
 } 
 
 bio->data = bio->data0 = data + n; 
 bio->offs = bio->offs0 = data; 
 bio->data_avail = maxdata - n; 
 bio->offs_avail = maxoffs; 
 bio->flags = 0; 
} 

It then calls bio_init_from_txn to initialize the msg variable:

void bio_init_from_txn(struct binder_io *bio, struct binder_txn *txn) 
{ 
 bio->data = bio->data0 = txn->data; 
 bio->offs = bio->offs0 = txn->offs; 
 bio->data_avail = txn->data_size; 
 bio->offs_avail = txn->offs_size / 4; 
 bio->flags = BIO_F_SHARED; 
} 

Finally, the real processing is done by the function pointer func passed in as a parameter, which here is the svcmgr_handler function defined in frameworks/base/cmds/servicemanager/service_manager.c:

int svcmgr_handler(struct binder_state *bs, 
     struct binder_txn *txn, 
     struct binder_io *msg, 
     struct binder_io *reply) 
{ 
 struct svcinfo *si; 
 uint16_t *s; 
 unsigned len; 
 void *ptr; 
 uint32_t strict_policy; 
 
 if (txn->target != svcmgr_handle) 
  return -1; 
 
 // Equivalent to Parcel::enforceInterface(), reading the RPC 
 // header with the strict mode policy mask and the interface name. 
 // Note that we ignore the strict_policy and don't propagate it 
 // further (since we do no outbound RPCs anyway). 
 strict_policy = bio_get_uint32(msg); 
 s = bio_get_string16(msg, &len); 
 if ((len != (sizeof(svcmgr_id) / 2)) || 
  memcmp(svcmgr_id, s, sizeof(svcmgr_id))) { 
   fprintf(stderr,"invalid id %s\n", str8(s)); 
   return -1; 
 } 
 
 switch(txn->code) { 
 ...... 
 case SVC_MGR_ADD_SERVICE: 
  s = bio_get_string16(msg, &len); 
  ptr = bio_get_ref(msg); 
  if (do_add_service(bs, s, len, ptr, txn->sender_euid)) 
   return -1; 
  break; 
 ...... 
 } 
 
 bio_put_uint32(reply, 0); 
 return 0; 
} 

Recall that when BpServiceManager::addService ran, the parameters handed to the Binder driver were:

writeInt32(IPCThreadState::self()->getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER); 
writeString16("android.os.IServiceManager"); 
writeString16("media.player"); 
writeStrongBinder(new MediaPlayerService()); 

The statements here:

strict_policy = bio_get_uint32(msg); 
s = bio_get_string16(msg, &len); 
s = bio_get_string16(msg, &len); 
ptr = bio_get_ref(msg); 

read them back out one by one. The only one we need to examine is bio_get_ref. First, the definition of the struct binder_object data structure:

struct binder_object 
{ 
 uint32_t type; 
 uint32_t flags; 
 void *pointer; 
 void *cookie; 
}; 

This structure corresponds to struct flat_binder_object.
Now look at the bio_get_ref implementation:

void *bio_get_ref(struct binder_io *bio) 
{ 
 struct binder_object *obj; 
 
 obj = _bio_get_obj(bio); 
 if (!obj) 
  return 0; 
 
 if (obj->type == BINDER_TYPE_HANDLE) 
  return obj->pointer; 
 
 return 0; 
} 

We will not step into _bio_get_obj; its job is to fetch from the binder_io the first binder_object that has not yet been consumed. In this scenario that is the flat_binder_object representing MediaPlayerService that we sent at the very beginning. The original flat_binder_object had type BINDER_TYPE_BINDER, with binder pointing at the address of MediaPlayerService's weak reference. As explained earlier, inside the Binder driver that flat_binder_object's type was changed to BINDER_TYPE_HANDLE and its handle to a handle value; that handle value is exactly what obj->pointer holds here.

         Back in svcmgr_handler, do_add_service is called for further processing:


int do_add_service(struct binder_state *bs, 
     uint16_t *s, unsigned len, 
     void *ptr, unsigned uid) 
{ 
 struct svcinfo *si; 
// LOGI("add_service('%s',%p) uid=%d\n", str8(s), ptr, uid); 
 
 if (!ptr || (len == 0) || (len > 127)) 
  return -1; 
 
 if (!svc_can_register(uid, s)) { 
  LOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n", 
    str8(s), ptr, uid); 
  return -1; 
 } 
 
 si = find_svc(s, len); 
 if (si) { 
  if (si->ptr) { 
   LOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED\n", 
     str8(s), ptr, uid); 
   return -1; 
  } 
  si->ptr = ptr; 
 } else { 
  si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t)); 
  if (!si) { 
   LOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n", 
     str8(s), ptr, uid); 
   return -1; 
  } 
  si->ptr = ptr; 
  si->len = len; 
  memcpy(si->name, s, (len + 1) * sizeof(uint16_t)); 
  si->name[len] = '\0'; 
  si->death.func = svcinfo_death; 
  si->death.ptr = si; 
  si->next = svclist; 
  svclist = si; 
 } 
 
 binder_acquire(bs, ptr); 
 binder_link_to_death(bs, ptr, &si->death); 
 return 0; 
} 

         This function's implementation is straightforward: it records the reference to the MediaPlayerService Binder entity, essentially its name and handle value, in a struct svcinfo, and inserts that entry at the head of the svclist linked list. Later, when a Client asks Service Manager for a service interface, Service Manager can return the corresponding handle given just the service name.
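
         To make this concrete, here is a minimal, self-contained sketch of the same idea (the svcinfo layout is trimmed to the fields used above, and the handle value is made up for the demo): registration inserts at the head of a singly-linked list, and a lookup simply walks the list comparing names:

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Trimmed-down svcinfo: only the fields used in do_add_service above.
struct svcinfo {
    svcinfo  *next;
    void     *ptr;        // the handle the Binder driver assigned to the service
    unsigned  len;        // name length in 16-bit units
    uint16_t  name[1];    // UTF-16 name, allocated with extra space
};

static svcinfo *svclist = nullptr;

static svcinfo *find_svc(const uint16_t *s, unsigned len)
{
    for (svcinfo *si = svclist; si; si = si->next)
        if (si->len == len && std::memcmp(si->name, s, len * sizeof(uint16_t)) == 0)
            return si;
    return nullptr;
}

static int add_service(const uint16_t *s, unsigned len, void *ptr)
{
    if (!ptr || len == 0 || len > 127) return -1;
    if (find_svc(s, len)) return -1;                 // already registered
    svcinfo *si = (svcinfo *)std::malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
    if (!si) return -1;
    si->ptr = ptr;
    si->len = len;
    std::memcpy(si->name, s, len * sizeof(uint16_t));
    si->name[len] = 0;
    si->next = svclist;                              // insert at the head, like do_add_service
    svclist  = si;
    return 0;
}

int main()
{
    // "media.player" as UTF-16; the handle value 1 is made up for the demo.
    const uint16_t name[] = { 'm','e','d','i','a','.','p','l','a','y','e','r' };
    const unsigned len    = sizeof(name) / sizeof(name[0]);

    add_service(name, len, (void *)1);
    svcinfo *si = find_svc(name, len);
    std::printf("lookup of media.player -> handle %p\n", si ? si->ptr : nullptr);
    return 0;
}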

         After this function completes, control returns to svcmgr_handler, which at the very end writes an error code of 0 into the reply variable to indicate that everything went fine:

             bio_put_uint32(reply, 0);  

        Once svcmgr_handler has finished, control returns to binder_parse, which executes the following statement:

               binder_send_reply(bs, &reply, txn->data, res);  

        Let us look at the implementation of binder_send_reply. As the name suggests, it tells the Binder driver that Service Manager has finished the task the driver handed to it.

void binder_send_reply(struct binder_state *bs, 
      struct binder_io *reply, 
      void *buffer_to_free, 
      int status) 
{ 
 struct { 
  uint32_t cmd_free; 
  void *buffer; 
  uint32_t cmd_reply; 
  struct binder_txn txn; 
 } __attribute__((packed)) data; 
 
 data.cmd_free = BC_FREE_BUFFER; 
 data.buffer = buffer_to_free; 
 data.cmd_reply = BC_REPLY; 
 data.txn.target = 0; 
 data.txn.cookie = 0; 
 data.txn.code = 0; 
 if (status) { 
  data.txn.flags = TF_STATUS_CODE; 
  data.txn.data_size = sizeof(int); 
  data.txn.offs_size = 0; 
  data.txn.data = &status; 
  data.txn.offs = 0; 
 } else { 
  data.txn.flags = 0; 
  data.txn.data_size = reply->data - reply->data0; 
  data.txn.offs_size = ((char*) reply->offs) - ((char*) reply->offs0); 
  data.txn.data = reply->data0; 
  data.txn.offs = reply->offs0; 
 } 
 binder_write(bs, &data, sizeof(data)); 
} 

        As can be seen here, binder_send_reply asks the Binder driver to execute two commands, BC_FREE_BUFFER and BC_REPLY. The former frees the buffer previously allocated in binder_transaction, at address buffer_to_free; this address was produced by the Binder driver converting the kernel-space address it uses internally into a user-space address before handing it to Service Manager, so when the driver receives the address back it knows how to free that buffer. The latter tells MediaPlayerService that its addService operation has completed with error code 0, stored in data.txn.data.
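
        The interesting point is that the two commands are batched into a single write buffer, which the driver later walks command by command (exactly what the while loop in binder_thread_write, shown further below, does). The following standalone sketch illustrates that idea; the command codes and the transaction header are placeholders, not the real values from the binder kernel headers:

#include <cstdint>
#include <cstdio>
#include <cstring>

// Placeholder command codes and a trimmed-down transaction header, only to
// illustrate how two commands sit back to back in one write buffer.
enum { CMD_FREE_BUFFER = 1, CMD_REPLY = 2 };

struct demo_txn {              // stands in for struct binder_txn
    uint32_t code;
    uint32_t flags;
    uint32_t data_size;
    uint32_t offs_size;
};

int main()
{
    struct __attribute__((packed)) {
        uint32_t cmd_free;
        void    *buffer;       // buffer_to_free in binder_send_reply
        uint32_t cmd_reply;
        demo_txn txn;
    } data;

    char fake_buffer[16];
    data.cmd_free  = CMD_FREE_BUFFER;
    data.buffer    = fake_buffer;
    data.cmd_reply = CMD_REPLY;
    data.txn       = demo_txn{ 0, 0, 4, 0 };   // e.g. a 4-byte status reply

    // The receiving side (compare binder_thread_write below) walks the same
    // buffer with a cursor, pulling one command code and then its payload.
    const char *ptr = reinterpret_cast<const char *>(&data);
    const char *end = ptr + sizeof(data);
    while (ptr < end) {
        uint32_t cmd;
        std::memcpy(&cmd, ptr, sizeof(cmd));
        ptr += sizeof(cmd);
        if (cmd == CMD_FREE_BUFFER) {
            ptr += sizeof(void *);             // skip the buffer pointer payload
            std::printf("saw FREE_BUFFER\n");
        } else if (cmd == CMD_REPLY) {
            ptr += sizeof(demo_txn);           // skip the transaction payload
            std::printf("saw REPLY\n");
        }
    }
    return 0;
}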

        Now look at the binder_write function:

int binder_write(struct binder_state *bs, void *data, unsigned len) 
{ 
 struct binder_write_read bwr; 
 int res; 
 bwr.write_size = len; 
 bwr.write_consumed = 0; 
 bwr.write_buffer = (unsigned) data; 
 bwr.read_size = 0; 
 bwr.read_consumed = 0; 
 bwr.read_buffer = 0; 
 res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); 
 if (res < 0) { 
  fprintf(stderr,"binder_write: ioctl failed (%s)\n", 
    strerror(errno)); 
 } 
 return res; 
} 

        Note that this is a pure write with no read: read_size is 0.

        Here again we have an ioctl with the BINDER_WRITE_READ command. It goes straight into the driver's binder_ioctl function, which handles BINDER_WRITE_READ as described before, so we will not repeat that path here.

        Finally, execution proceeds from binder_ioctl into binder_thread_write. Let us look at the first command, BC_FREE_BUFFER:

int 
binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, 
     void __user *buffer, int size, signed long *consumed) 
{ 
 uint32_t cmd; 
 void __user *ptr = buffer + *consumed; 
 void __user *end = buffer + size; 
 
 while (ptr < end && thread->return_error == BR_OK) { 
  if (get_user(cmd, (uint32_t __user *)ptr)) 
   return -EFAULT; 
  ptr += sizeof(uint32_t); 
  if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) { 
   binder_stats.bc[_IOC_NR(cmd)]++; 
   proc->stats.bc[_IOC_NR(cmd)]++; 
   thread->stats.bc[_IOC_NR(cmd)]++; 
  } 
  switch (cmd) { 
  ...... 
  case BC_FREE_BUFFER: { 
   void __user *data_ptr; 
   struct binder_buffer *buffer; 
 
   if (get_user(data_ptr, (void * __user *)ptr)) 
    return -EFAULT; 
   ptr += sizeof(void *); 
 
   buffer = binder_buffer_lookup(proc, data_ptr); 
   if (buffer == NULL) { 
    binder_user_error("binder: %d:%d " 
     "BC_FREE_BUFFER u%p no match\n", 
     proc->pid, thread->pid, data_ptr); 
    break; 
   } 
   if (!buffer->allow_user_free) { 
    binder_user_error("binder: %d:%d " 
     "BC_FREE_BUFFER u%p matched " 
     "unreturned buffer\n", 
     proc->pid, thread->pid, data_ptr); 
    break; 
   } 
   if (binder_debug_mask & BINDER_DEBUG_FREE_BUFFER) 
    printk(KERN_INFO "binder: %d:%d BC_FREE_BUFFER u%p found buffer %d for %s transaction\n", 
    proc->pid, thread->pid, data_ptr, buffer->debug_id, 
    buffer->transaction ? "active" : "finished"); 
 
   if (buffer->transaction) { 
    buffer->transaction->buffer = NULL; 
    buffer->transaction = NULL; 
   } 
   if (buffer->async_transaction && buffer->target_node) { 
    BUG_ON(!buffer->target_node->has_async_transaction); 
    if (list_empty(&buffer->target_node->async_todo)) 
     buffer->target_node->has_async_transaction = 0; 
    else 
     list_move_tail(buffer->target_node->async_todo.next, &thread->todo); 
   } 
   binder_transaction_buffer_release(proc, buffer, NULL); 
   binder_free_buf(proc, buffer); 
   break; 
        } 
 
  ...... 
  *consumed = ptr - buffer; 
 } 
 return 0; 
} 

        First, look at this statement:

get_user(data_ptr, (void * __user *)ptr) 

        It obtains the user-space address of the buffer to be freed. The following statement then locates the struct binder_buffer corresponding to that address:

            buffer = binder_buffer_lookup(proc, data_ptr);  

        Since this buffer was allocated earlier in binder_transaction, the lookup is guaranteed to find it.

        Finally, the memory can be released:

            binder_transaction_buffer_release(proc, buffer, NULL);  
            binder_free_buf(proc, buffer);  

        Now look at the other command, BC_REPLY:

int 
binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, 
     void __user *buffer, int size, signed long *consumed) 
{ 
 uint32_t cmd; 
 void __user *ptr = buffer + *consumed; 
 void __user *end = buffer + size; 
 
 while (ptr < end && thread->return_error == BR_OK) { 
  if (get_user(cmd, (uint32_t __user *)ptr)) 
   return -EFAULT; 
  ptr += sizeof(uint32_t); 
  if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) { 
   binder_stats.bc[_IOC_NR(cmd)]++; 
   proc->stats.bc[_IOC_NR(cmd)]++; 
   thread->stats.bc[_IOC_NR(cmd)]++; 
  } 
  switch (cmd) { 
  ...... 
  case BC_TRANSACTION: 
  case BC_REPLY: { 
   struct binder_transaction_data tr; 
 
   if (copy_from_user(&tr, ptr, sizeof(tr))) 
    return -EFAULT; 
   ptr += sizeof(tr); 
   binder_transaction(proc, thread, &tr, cmd == BC_REPLY); 
   break; 
      } 
 
  ...... 
  *consumed = ptr - buffer; 
 } 
 return 0; 
} 

        This takes us into the binder_transaction function once again:

static void 
binder_transaction(struct binder_proc *proc, struct binder_thread *thread, 
struct binder_transaction_data *tr, int reply) 
{ 
 struct binder_transaction *t; 
 struct binder_work *tcomplete; 
 size_t *offp, *off_end; 
 struct binder_proc *target_proc; 
 struct binder_thread *target_thread = NULL; 
 struct binder_node *target_node = NULL; 
 struct list_head *target_list; 
 wait_queue_head_t *target_wait; 
 struct binder_transaction *in_reply_to = NULL; 
 struct binder_transaction_log_entry *e; 
 uint32_t return_error; 
 
 ...... 
 
 if (reply) { 
  in_reply_to = thread->transaction_stack; 
  if (in_reply_to == NULL) { 
   ...... 
   return_error = BR_FAILED_REPLY; 
   goto err_empty_call_stack; 
  } 
  binder_set_nice(in_reply_to->saved_priority); 
  if (in_reply_to->to_thread != thread) { 
   ....... 
   goto err_bad_call_stack; 
  } 
  thread->transaction_stack = in_reply_to->to_parent; 
  target_thread = in_reply_to->from; 
  if (target_thread == NULL) { 
   return_error = BR_DEAD_REPLY; 
   goto err_dead_binder; 
  } 
  if (target_thread->transaction_stack != in_reply_to) { 
   ...... 
   return_error = BR_FAILED_REPLY; 
   in_reply_to = NULL; 
   target_thread = NULL; 
   goto err_dead_binder; 
  } 
  target_proc = target_thread->proc; 
 } else { 
  ...... 
 } 
 if (target_thread) { 
  e->to_thread = target_thread->pid; 
  target_list = &target_thread->todo; 
  target_wait = &target_thread->wait; 
 } else { 
  ...... 
 } 
 
 
 /* TODO: reuse incoming transaction for reply */ 
 t = kzalloc(sizeof(*t), GFP_KERNEL); 
 if (t == NULL) { 
  return_error = BR_FAILED_REPLY; 
  goto err_alloc_t_failed; 
 } 
  
 
 tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL); 
 if (tcomplete == NULL) { 
  return_error = BR_FAILED_REPLY; 
  goto err_alloc_tcomplete_failed; 
 } 
 
 if (!reply && !(tr->flags & TF_ONE_WAY)) 
  t->from = thread; 
 else 
  t->from = NULL; 
 t->sender_euid = proc->tsk->cred->euid; 
 t->to_proc = target_proc; 
 t->to_thread = target_thread; 
 t->code = tr->code; 
 t->flags = tr->flags; 
 t->priority = task_nice(current); 
 t->buffer = binder_alloc_buf(target_proc, tr->data_size, 
  tr->offsets_size, !reply && (t->flags & TF_ONE_WAY)); 
 if (t->buffer == NULL) { 
  return_error = BR_FAILED_REPLY; 
  goto err_binder_alloc_buf_failed; 
 } 
 t->buffer->allow_user_free = 0; 
 t->buffer->debug_id = t->debug_id; 
 t->buffer->transaction = t; 
 t->buffer->target_node = target_node; 
 if (target_node) 
  binder_inc_node(target_node, 1, 0, NULL); 
 
 offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *))); 
 
 if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) { 
  binder_user_error("binder: %d:%d got transaction with invalid " 
   "data ptr\n", proc->pid, thread->pid); 
  return_error = BR_FAILED_REPLY; 
  goto err_copy_data_failed; 
 } 
 if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) { 
  binder_user_error("binder: %d:%d got transaction with invalid " 
   "offsets ptr\n", proc->pid, thread->pid); 
  return_error = BR_FAILED_REPLY; 
  goto err_copy_data_failed; 
 } 
  
 ...... 
 
 if (reply) { 
  BUG_ON(t->buffer->async_transaction != 0); 
  binder_pop_transaction(target_thread, in_reply_to); 
 } else if (!(t->flags & TF_ONE_WAY)) { 
  ...... 
 } else { 
  ...... 
 } 
 t->work.type = BINDER_WORK_TRANSACTION; 
 list_add_tail(&t->work.entry, target_list); 
 tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; 
 list_add_tail(&tcomplete->entry, &thread->todo); 
 if (target_wait) 
  wake_up_interruptible(target_wait); 
 return; 
 ...... 
} 

        Note that reply is 1 here; the unrelated code has been omitted.

        Earlier, after Service Manager was woken up in binder_thread_read by the MediaPlayerService process, it pushed the transaction it was about to handle onto thread->transaction_stack at the end of that function:

if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) { 
 t->to_parent = thread->transaction_stack; 
 t->to_thread = thread; 
 thread->transaction_stack = t; 
} 

        So here, the first step is to take that binder_transaction back and store it in the local variable in_reply_to:

                      in_reply_to = thread->transaction_stack;  

        Through in_reply_to we can then obtain the thread and process that originally issued this transaction request:

                      target_thread = in_reply_to->from; 
                      target_proc = target_thread->proc;  

        Then target_list and target_wait are obtained:

                    target_list = &target_thread->todo;  
                    target_wait = &target_thread->wait;  

        The following block of code:

/* TODO: reuse incoming transaction for reply */ 
t = kzalloc(sizeof(*t), GFP_KERNEL); 
if (t == NULL) { 
 return_error = BR_FAILED_REPLY; 
 goto err_alloc_t_failed; 
} 
 
 
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL); 
if (tcomplete == NULL) { 
 return_error = BR_FAILED_REPLY; 
 goto err_alloc_tcomplete_failed; 
} 
 
if (!reply && !(tr->flags & TF_ONE_WAY)) 
 t->from = thread; 
else 
 t->from = NULL; 
t->sender_euid = proc->tsk->cred->euid; 
t->to_proc = target_proc; 
t->to_thread = target_thread; 
t->code = tr->code; 
t->flags = tr->flags; 
t->priority = task_nice(current); 
t->buffer = binder_alloc_buf(target_proc, tr->data_size, 
 tr->offsets_size, !reply && (t->flags & TF_ONE_WAY)); 
if (t->buffer == NULL) { 
 return_error = BR_FAILED_REPLY; 
 goto err_binder_alloc_buf_failed; 
} 
t->buffer->allow_user_free = 0; 
t->buffer->debug_id = t->debug_id; 
t->buffer->transaction = t; 
t->buffer->target_node = target_node; 
if (target_node) 
 binder_inc_node(target_node, 1, 0, NULL); 
 
offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *))); 
 
if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) { 
 binder_user_error("binder: %d:%d got transaction with invalid " 
  "data ptr\n", proc->pid, thread->pid); 
 return_error = BR_FAILED_REPLY; 
 goto err_copy_data_failed; 
} 
if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) { 
 binder_user_error("binder: %d:%d got transaction with invalid " 
  "offsets ptr\n", proc->pid, thread->pid); 
 return_error = BR_FAILED_REPLY; 
 goto err_copy_data_failed; 
} 

          has already been analyzed earlier, so we will not repeat it. One point to note is that target_node is NULL here, and therefore t->buffer->target_node is also NULL.

          The function normally contains a for loop that processes Binder objects embedded in the data; since there are no Binder objects this time, it is skipped. We then reach the following statement:

                   binder_pop_transaction(target_thread, in_reply_to);  

          Let us see what it does:

static void 
binder_pop_transaction( 
 struct binder_thread *target_thread, struct binder_transaction *t) 
{ 
 if (target_thread) { 
  BUG_ON(target_thread->transaction_stack != t); 
  BUG_ON(target_thread->transaction_stack->from != target_thread); 
  target_thread->transaction_stack = 
   target_thread->transaction_stack->from_parent; 
  t->from = NULL; 
 } 
 t->need_reply = 0; 
 if (t->buffer) 
  t->buffer->transaction = NULL; 
 kfree(t); 
 binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++; 
} 

        At this point the in_reply_to transaction is no longer needed, so it is deleted.

        Back in binder_transaction:

t->work.type = BINDER_WORK_TRANSACTION; 
list_add_tail(&t->work.entry, target_list); 
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; 
list_add_tail(&tcomplete->entry, &thread->todo); 

As before, t and tcomplete are placed on the target_list and thread->todo queues respectively. Here target_list is the todo queue of the MediaPlayerService Server main thread that originally called IServiceManager::addService, while thread->todo is the todo queue of the Service Manager thread that is replying to the IServiceManager::addService request.

        Finally, the thread waiting on the target_wait queue is woken up. That is the MediaPlayerService Server main thread that originally called IServiceManager::addService; it went to sleep in binder_thread_read on its thread->wait, which is exactly the target_wait here:

                  if (target_wait)  
                              wake_up_interruptible(target_wait);  

        With that, Service Manager's reply to the IServiceManager::addService request is complete, and it returns to the binder_loop function in frameworks/base/cmds/servicemanager/binder.c to wait for the next Client request. In fact, when Service Manager goes back to binder_loop and calls ioctl again, it re-enters binder_thread_read. This time it finds that thread->todo is not empty, because we just called:

                 list_add_tail(&tcomplete->entry, &thread->todo);  

         which placed a work item, tcomplete, on thread->todo. Since this tcomplete has type BINDER_WORK_TRANSACTION_COMPLETE, the Binder driver executes the following:

switch (w->type) { 
case BINDER_WORK_TRANSACTION_COMPLETE: { 
 cmd = BR_TRANSACTION_COMPLETE; 
 if (put_user(cmd, (uint32_t __user *)ptr)) 
  return -EFAULT; 
 ptr += sizeof(uint32_t); 
 
 list_del(&w->entry); 
 kfree(w); 
  
 } break; 
 ...... 
} 


        Only after binder_loop finishes this ioctl call does it, on the next ioctl call, enter the Binder driver again and go to sleep waiting for the next Client request.

        As mentioned above, the MediaPlayerService Server main thread that called IServiceManager::addService has been woken up, so it resumes execution in binder_thread_read:

static int 
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread, 
     void __user *buffer, int size, signed long *consumed, int non_block) 
{ 
 void __user *ptr = buffer + *consumed; 
 void __user *end = buffer + size; 
 
 int ret = 0; 
 int wait_for_proc_work; 
 
 if (*consumed == 0) { 
  if (put_user(BR_NOOP, (uint32_t __user *)ptr)) 
   return -EFAULT; 
  ptr += sizeof(uint32_t); 
 } 
 
retry: 
 wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo); 
 
 ...... 
 
 if (wait_for_proc_work) { 
  ...... 
 } else { 
  if (non_block) { 
   if (!binder_has_thread_work(thread)) 
    ret = -EAGAIN; 
  } else 
   ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread)); 
 } 
  
 ...... 
 
 while (1) { 
  uint32_t cmd; 
  struct binder_transaction_data tr; 
  struct binder_work *w; 
  struct binder_transaction *t = NULL; 
 
  if (!list_empty(&thread->todo)) 
   w = list_first_entry(&thread->todo, struct binder_work, entry); 
  else if (!list_empty(&proc->todo) && wait_for_proc_work) 
   w = list_first_entry(&proc->todo, struct binder_work, entry); 
  else { 
   if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */ 
    goto retry; 
   break; 
  } 
 
  ...... 
 
  switch (w->type) { 
  case BINDER_WORK_TRANSACTION: { 
   t = container_of(w, struct binder_transaction, work); 
          } break; 
  ...... 
  } 
 
  if (!t) 
   continue; 
 
  BUG_ON(t->buffer == NULL); 
  if (t->buffer->target_node) { 
   ...... 
  } else { 
   tr.target.ptr = NULL; 
   tr.cookie = NULL; 
   cmd = BR_REPLY; 
  } 
  tr.code = t->code; 
  tr.flags = t->flags; 
  tr.sender_euid = t->sender_euid; 
 
  if (t->from) { 
   ...... 
  } else { 
   tr.sender_pid = 0; 
  } 
 
  tr.data_size = t->buffer->data_size; 
  tr.offsets_size = t->buffer->offsets_size; 
  tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset; 
  tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *)); 
 
  if (put_user(cmd, (uint32_t __user *)ptr)) 
   return -EFAULT; 
  ptr += sizeof(uint32_t); 
  if (copy_to_user(ptr, &tr, sizeof(tr))) 
   return -EFAULT; 
  ptr += sizeof(tr); 
 
  ...... 
 
  list_del(&t->work.entry); 
  t->buffer->allow_user_free = 1; 
  if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) { 
   ...... 
  } else { 
   t->buffer->transaction = NULL; 
   kfree(t); 
   binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++; 
  } 
  break; 
 } 
 
done: 
 ...... 
 return 0; 
} 

         Inside the while loop, w is taken from thread->todo; its type is BINDER_WORK_TRANSACTION, from which t is obtained. As shown above, Service Manager returned a 0, which is stored in t->buffer->data. Adding proc->user_buffer_offset to t->buffer->data yields the corresponding user-space address, which is saved in tr.data.ptr.buffer so that user space can access the return code. Since cmd is not BR_TRANSACTION, t can be freed right away, as it is no longer needed.
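
         This works because the transaction buffer is a single piece of memory mapped both into the kernel and into the Server process; proc->user_buffer_offset is simply the constant distance between the two mappings, so no data is copied and only the pointer is adjusted. A trivial sketch of the arithmetic, with made-up addresses:

#include <cstdint>
#include <cstdio>

int main()
{
    // Purely illustrative addresses: the same pages are mapped at kernel_base
    // inside the kernel and at user_base inside the Server process.
    uintptr_t kernel_base = 0x1000;
    uintptr_t user_base   = 0x9000;
    uintptr_t user_buffer_offset = user_base - kernel_base;    // fixed distance

    uintptr_t kernel_ptr = kernel_base + 0x80;                 // e.g. t->buffer->data
    uintptr_t user_ptr   = kernel_ptr + user_buffer_offset;    // what goes into tr.data.ptr.buffer

    std::printf("kernel 0x%lx -> user 0x%lx\n",
                (unsigned long)kernel_ptr, (unsigned long)user_ptr);
    return 0;
}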

         After this function finishes, control returns to binder_ioctl, which executes the following statement to hand the data back to user space:

if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { 
 ret = -EFAULT; 
 goto err; 
} 

         Execution then returns to IPCThreadState::talkWithDriver in user space, and from there to IPCThreadState::waitForResponse, eventually reaching the following code:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult) 
{ 
 int32_t cmd; 
 int32_t err; 
 
 while (1) { 
  if ((err=talkWithDriver()) < NO_ERROR) break; 
   
  ...... 
 
  cmd = mIn.readInt32(); 
 
  ...... 
 
  switch (cmd) { 
  ...... 
  case BR_REPLY: 
   { 
    binder_transaction_data tr; 
    err = mIn.read(&tr, sizeof(tr)); 
    LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY"); 
    if (err != NO_ERROR) goto finish; 
 
    if (reply) { 
     if ((tr.flags & TF_STATUS_CODE) == 0) { 
      reply->ipcSetDataReference( 
       reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), 
       tr.data_size, 
       reinterpret_cast<const size_t*>(tr.data.ptr.offsets), 
       tr.offsets_size/sizeof(size_t), 
       freeBuffer, this); 
     } else { 
      ...... 
     } 
    } else { 
     ...... 
    } 
   } 
   goto finish; 
 
  ...... 
  } 
 } 
 
finish: 
 ...... 
 return err; 
} 

        Note that tr.flags is 0 here, as set earlier in binder_send_reply. The result finally ends up in reply:

reply->ipcSetDataReference( 
  reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), 
  tr.data_size, 
  reinterpret_cast<const size_t*>(tr.data.ptr.offsets), 
  tr.offsets_size/sizeof(size_t), 
  freeBuffer, this); 

       We will not walk through this function here; interested readers can study it on their own. Roughly speaking, it points the Parcel at the buffer handed back by the driver and registers freeBuffer as the release function, so that the buffer is returned to the Binder driver once the Parcel is done with it.

       From here the calls unwind layer by layer until we are back in MediaPlayerService::instantiate.

       At this point, IServiceManager::addService has finally finished executing. The process is quite involved, but understanding it thoroughly goes a long way toward understanding the design and implementation of the Binder mechanism. To sum up the interaction between MediaPlayerService, Service Manager and the Binder driver during IServiceManager::addService: MediaPlayerService flattens the service name and the MediaPlayerService Binder entity into a Parcel and sends a BC_TRANSACTION to the driver; the driver rewrites the flat_binder_object into a handle, copies the data into a buffer mapped into Service Manager's address space and wakes Service Manager up; Service Manager records the name and handle in svclist and answers with BC_FREE_BUFFER and BC_REPLY; the driver frees the transaction buffer and wakes the MediaPlayerService main thread, which picks up the BR_REPLY, reads the status code 0 and returns from addService.

        Back in the main function in frameworks/base/media/mediaserver/main_mediaserver.cpp, two more calls remain to be executed:

                 ProcessState::self()->startThreadPool(); 
                 IPCThreadState::self()->joinThreadPool();  

        First, look at the implementation of ProcessState::startThreadPool:

void ProcessState::startThreadPool() 
{ 
 AutoMutex _l(mLock); 
 if (!mThreadPoolStarted) { 
  mThreadPoolStarted = true; 
  spawnPooledThread(true); 
 } 
} 

       This calls spawnPooledThread:

void ProcessState::spawnPooledThread(bool isMain) 
{ 
 if (mThreadPoolStarted) { 
  int32_t s = android_atomic_add(1, &mThreadPoolSeq); 
  char buf[32]; 
  sprintf(buf, "Binder Thread #%d", s); 
  LOGV("Spawning new pooled thread, name=%s\n", buf); 
  sp<Thread> t = new PoolThread(isMain); 
  t->run(buf); 
 } 
} 

       This essentially creates a new thread. PoolThread inherits from the Thread class defined in frameworks/base/libs/utils/Threads.cpp; its run function eventually calls the subclass's threadLoop, which here is PoolThread::threadLoop:

virtual bool threadLoop() 
{ 
 IPCThreadState::self()->joinThreadPool(mIsMain); 
 return false; 
} 

       Just like the main function in frameworks/base/media/mediaserver/main_mediaserver.cpp, this ends up calling IPCThreadState::joinThreadPool; the difference is the argument, true in one case and the default value, false, in the other. Let us look at this function's implementation:

void IPCThreadState::joinThreadPool(bool isMain) 
{ 
 LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid()); 
 
 mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER); 
 
 ...... 
 
 status_t result; 
 do { 
  int32_t cmd; 
 
  ....... 
 
  // now get the next command to be processed, waiting if necessary 
  result = talkWithDriver(); 
  if (result >= NO_ERROR) { 
   size_t IN = mIn.dataAvail(); 
   if (IN < sizeof(int32_t)) continue; 
   cmd = mIn.readInt32(); 
   ...... 
   } 
 
   result = executeCommand(cmd); 
  } 
 
  ...... 
 } while (result != -ECONNREFUSED && result != -EBADF); 
 
 ....... 
 
 mOut.writeInt32(BC_EXIT_LOOPER); 
 talkWithDriver(false); 
} 

        Ultimately this function sits in an infinite loop, interacting with the Binder driver through talkWithDriver: in effect it calls talkWithDriver to wait for a Client request, then calls executeCommand to handle it, and inside executeCommand it is BBinder::transact that ends up doing the real work of processing the Client's request:

status_t IPCThreadState::executeCommand(int32_t cmd) 
{ 
 BBinder* obj; 
 RefBase::weakref_type* refs; 
 status_t result = NO_ERROR; 
 
 switch (cmd) { 
 ...... 
 
 case BR_TRANSACTION: 
  { 
   binder_transaction_data tr; 
   result = mIn.read(&tr, sizeof(tr)); 
    
   ...... 
 
   Parcel reply; 
    
   ...... 
 
   if (tr.target.ptr) { 
    sp<BBinder> b((BBinder*)tr.cookie); 
    const status_t error = b->transact(tr.code, buffer, &reply, tr.flags); 
    if (error < NO_ERROR) reply.setError(error); 
 
   } else { 
    const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags); 
    if (error < NO_ERROR) reply.setError(error); 
   } 
 
   ...... 
  } 
  break; 
 
 ....... 
 } 
 
 if (result != NO_ERROR) { 
  mLastError = result; 
 } 
 
 return result; 
} 

        Next, look at the implementation of BBinder::transact:

status_t BBinder::transact( 
 uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) 
{ 
 data.setDataPosition(0); 
 
 status_t err = NO_ERROR; 
 switch (code) { 
  case PING_TRANSACTION: 
   reply->writeInt32(pingBinder()); 
   break; 
  default: 
   err = onTransact(code, data, reply, flags); 
   break; 
 } 
 
 if (reply != NULL) { 
  reply->setDataPosition(0); 
 } 
 
 return err; 
} 

       It ends up calling onTransact to do the handling. In this scenario, BnMediaPlayerService derives from BBinder and overrides onTransact, so what actually gets called is BnMediaPlayerService::onTransact, defined in frameworks/base/libs/media/libmedia/IMediaPlayerService.cpp:

status_t BnMediaPlayerService::onTransact( 
 uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) 
{ 
 switch(code) { 
  case CREATE_URL: { 
   ...... 
       } break; 
  case CREATE_FD: { 
   ...... 
      } break; 
  case DECODE_URL: { 
   ...... 
       } break; 
  case DECODE_FD: { 
   ...... 
      } break; 
  case CREATE_MEDIA_RECORDER: { 
   ...... 
         } break; 
  case CREATE_METADATA_RETRIEVER: { 
   ...... 
          } break; 
  case GET_OMX: { 
   ...... 
      } break; 
  default: 
   return BBinder::onTransact(code, data, reply, flags); 
 } 
} 

       With that, using MediaPlayerService as an example, we have walked through the complete Server startup process of the Binder inter-process communication mechanism in Android. Once the Server is up, it waits for Client requests in an infinite loop. In the next article we will look at how a Client obtains a Server's remote interface through the Service Manager remote interface and then invokes it to use the services the Server provides. Stay tuned.
