
Volley Source Code Analysis: An Exemplar of Programming to Interfaces


The basic idea

Volley uses a producer-consumer model. Producers (users of Volley) add requests to the request queue by calling add; the cache dispatcher and the network dispatchers act as consumers, taking requests off the queue and processing them, deciding for each one whether to serve it from the cache or from the network, and finally switching threads to deliver the fetched data back to the UI thread.
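
Before diving into the source, a minimal usage sketch of the producer side may help (the URL and the surrounding context object are placeholders, not part of the Volley source):

    // Producer side: create a queue and add a request (sketch; context is some Context instance).
    RequestQueue queue = Volley.newRequestQueue(context);   // also starts the dispatchers

    StringRequest request = new StringRequest(Request.Method.GET,
            "http://example.com/data",                       // placeholder URL
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    // Delivered on the UI thread once a dispatcher has produced a result.
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    // Errors are delivered on the UI thread as well.
                }
            });

    queue.add(request);   // the "produce" step; the dispatchers are the consumers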

Creating the request queue

Volley creates a RequestQueue through the static factory method newRequestQueue:

    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        return queue;
    }

A userAgent string is built from the application's package name and version code.
An HttpStack is chosen according to the Android version (HurlStack on API 9+, HttpClientStack before that).
A BasicNetwork object is created on top of that stack, and a DiskBasedCache object is created for the cache directory.

With the cache and the network in place, the RequestQueue can be created:

   RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);

The queue's start method is then called to start the cache dispatcher and the network dispatchers. The consumers begin working, continuously taking requests off the request queue and processing them, blocking when the queue is empty.

Starting the cache dispatcher and the network dispatchers

    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

As the code shows, RequestQueue.start() starts one cache dispatcher thread and several (four by default) network dispatcher threads.
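
The pool size is configurable through a RequestQueue constructor shown near the end of this article; a small sketch, reusing the cacheDir and network variables from newRequestQueue above:

    // Use 2 network dispatcher threads instead of the default 4 (sketch).
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
    queue.start();   // starts 1 CacheDispatcher and 2 NetworkDispatchers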

Analysis of the cache dispatcher: CacheDispatcher

CacheDispatcher extends Thread, so it is in fact a thread:

public class CacheDispatcher extends Thread

CacheDispatcher is a textbook example of programming to interfaces: every field it depends on is an interface rather than a concrete implementation, injected through the constructor (dependency injection).

    /** The queue of requests coming in for triage. */
    private final BlockingQueue<Request<?>> mCacheQueue;

    /** The queue of requests going out to the network. */
    private final BlockingQueue<Request<?>> mNetworkQueue;

    /** The cache to read from. */
    private final Cache mCache;

    /** For posting responses. */
    private final ResponseDelivery mDelivery;

BlockingQueue, Cache, and ResponseDelivery are all interfaces rather than concrete implementations, which lowers coupling between classes and keeps the design flexible.
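
Because CacheDispatcher only knows about the Cache interface, any implementation can be plugged in; Volley itself ships DiskBasedCache and NoCache, and the rough in-memory sketch below (not part of Volley, purely illustrative) is enough to satisfy the contract:

    // Hypothetical in-memory Cache, only to illustrate the interface contract.
    // Needs: import java.util.HashMap; import java.util.Map; import com.android.volley.Cache;
    public class MemoryCache implements Cache {
        private final Map<String, Entry> mEntries = new HashMap<String, Entry>();

        @Override public synchronized Entry get(String key) { return mEntries.get(key); }

        @Override public synchronized void put(String key, Entry entry) { mEntries.put(key, entry); }

        @Override public synchronized void initialize() { /* nothing to load */ }

        @Override public synchronized void invalidate(String key, boolean fullExpire) {
            Entry entry = mEntries.get(key);
            if (entry != null) {
                entry.softTtl = 0;                 // force a refresh on the next hit
                if (fullExpire) entry.ttl = 0;     // treat it as fully expired
            }
        }

        @Override public synchronized void remove(String key) { mEntries.remove(key); }

        @Override public synchronized void clear() { mEntries.clear(); }
    }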

   public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }

After setting its thread priority and initializing the cache, the cache dispatcher thread enters an infinite loop, continuously taking requests from mCacheQueue and processing them.
If a request it takes has already been canceled, the request is finished and the loop moves on to the next one:

      if (request.isCanceled()) {
          request.finish("cache-discard-canceled");
          continue;
      }

If the request has not been canceled, the dispatcher checks whether the cache already has an entry for it. On a cache miss, the request is handed to the network queue to be fetched from the network, and the loop continues with the next request from the cache queue:

      // Attempt to retrieve this item from cache.
      Cache.Entry entry = mCache.get(request.getCacheKey());
      if (entry == null) {
          request.addMarker("cache-miss");
          // Cache miss; send off to the network dispatcher.
          mNetworkQueue.put(request);
          continue;
      }

If there is a cache hit but the entry has fully expired, the data still has to be re-fetched from the network:

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }
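
The expiry checks used here are driven by Cache.Entry's ttl (hard expiry) and softTtl (soft expiry, discussed next) fields; in the Volley source they look roughly like this:

    /** True if the entry is completely expired. */
    public boolean isExpired() {
        return this.ttl < System.currentTimeMillis();
    }

    /** True if a refresh is needed from the original data source. */
    public boolean refreshNeeded() {
        return this.softTtl < System.currentTimeMillis();
    }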

If the cached entry is not expired and does not need a refresh, the cached data is posted straight to the UI thread, saving a network round trip:

        if (!entry.refreshNeeded()) {
           // Completely unexpired cache hit. Just deliver the response.
           mDelivery.postResponse(request, response);
        } 

If the entry is not expired but does need a refresh, the cached data is posted to the UI thread first, and the request is then quietly sent to the network in the background; the network dispatcher refreshes the cache and the UI data once the fresh response arrives.

            else {
               // Soft-expired cache hit. We can deliver the cached response,
               // but we need to also send the request to the network for
               // refreshing.
               request.addMarker("cache-hit-refresh-needed");
               request.setCacheEntry(entry);

               // Mark the response as intermediate.
               response.intermediate = true;

               // Post the intermediate response back to the user and have
               // the delivery then forward the request along to the network.
               mDelivery.postResponse(request, response, new Runnable() {
                   @Override
                   public void run() {
                       try {
                           mNetworkQueue.put(request);
                       } catch (InterruptedException e) {
                           // Not much we can do about this.
                       }
                   }
               });
           }
    /**
     * Parses a response from the network or cache and delivers it. The provided
     * Runnable will be executed after delivery.
     */
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable);

postResponse first delivers the cached data to the UI thread and then runs the Runnable, i.e. mNetworkQueue.put(request), so the latest data is fetched over the network.

Analysis of the network dispatcher: NetworkDispatcher

NetworkDispatcher is very similar to CacheDispatcher: both extend Thread, and both are exemplars of programming to interfaces.

    /** The queue of requests to service. */
    private final BlockingQueue<Request<?>> mQueue;
    /** The network interface for processing requests. */
    private final Network mNetwork;
    /** The cache to write to. */
    private final Cache mCache;
    /** For posting responses and errors. */
    private final ResponseDelivery mDelivery;

The network dispatcher keeps taking requests from the network queue mQueue, fetches fresh data through mNetwork, writes that data into the cache mCache, and uses mDelivery to switch threads and hand the result back to the UI thread.

   public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

After setting its thread priority, the network dispatcher thread enters an infinite loop, continuously reading requests from the network queue and processing them.
If a request it takes has already been canceled, the request is finished and the loop moves on to the next one:

     // If the request was cancelled already, do not perform the
     // network request.
     if (request.isCanceled()) {
         request.finish("network-discard-cancelled");
         continue;
     }

If the request has not been canceled, the network is accessed through mNetwork:

    // Perform the network request.
    NetworkResponse networkResponse = mNetwork.performRequest(request);

A 304 from the server means the requested resource has not changed since the last request.
If the server returns 304 and a response has already been delivered for this request, the earlier response can simply be reused; there is no need to deliver a second identical response, so this request is finished and the loop moves on to the next one from the network queue:

    // If the server returned 304 AND we delivered a response already,
    // we're done -- don't deliver a second identical response.
    if (networkResponse.notModified && request.hasHadResponseDelivered()) {
        request.finish("not-modified");
        continue;
    }

    /**
     * Returns true if this request has had a response delivered for it.
     */
    public boolean hasHadResponseDelivered()
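
For the server to be able to answer with 304 at all, BasicNetwork sends conditional headers built from the cache entry that the cache dispatcher attached to the request via setCacheEntry; roughly, from the Volley source:

    private void addCacheHeaders(Map<String, String> headers, Cache.Entry entry) {
        // If there's no cache entry, we're done.
        if (entry == null) {
            return;
        }

        if (entry.etag != null) {
            headers.put("If-None-Match", entry.etag);
        }

        if (entry.serverDate > 0) {
            Date refTime = new Date(entry.serverDate);
            headers.put("If-Modified-Since", DateUtils.formatDate(refTime));
        }
    }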

If the status code is not 304, the requested data has changed on the server, so the new response has to be parsed:

    // Parse the response here on the worker thread.
    Response<?> response = request.parseNetworkResponse(networkResponse);
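
What parseNetworkResponse returns depends on the concrete Request subclass. StringRequest, for example, implements it roughly as follows; note that it attaches a cache entry built by HttpHeaderParser.parseCacheHeaders, which is exactly what gets written to the cache in the next step:

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        String parsed;
        try {
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }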

If the request should be cached and the parsed response carries a cache entry, the response is written into the cache:

    if (request.shouldCache() && response.cacheEntry != null) {
        mCache.put(request.getCacheKey(), response.cacheEntry);
        request.addMarker("network-cache-written");
    }

Finally, the response is delivered to the user:

    mDelivery.postResponse(request, response);

If the network request fails, the failure is likewise delivered to the user so that the user can handle it:

    mDelivery.postError(request, volleyError);

Analysis of RequestQueue's fields

    /** Used for generating monotonically-increasing sequence numbers for requests. */
    private AtomicInteger mSequenceGenerator = new AtomicInteger();

Every request added to the queue gets a globally unique sequence number; using an AtomicInteger guarantees that no duplicate sequence numbers are produced even when requests are added concurrently from multiple threads.
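
The number itself is handed out by RequestQueue.getSequenceNumber(), which is just an atomic increment:

    /** Gets a sequence number. */
    public int getSequenceNumber() {
        return mSequenceGenerator.incrementAndGet();
    }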

    /**
     * Staging area for requests that already have a duplicate request in flight.
     * containsKey(cacheKey) indicates that there is a request in flight for the given cache key.
     * get(cacheKey) returns waiting requests for the given cache key. The in-flight request is
     * not contained in that list. Is null if no requests are staged.
     */
    private final Map<String, Queue<Request<?>>> mWaitingRequests =
            new HashMap<String, Queue<Request<?>>>();

mWaitingRequests is a Map whose key is a cache key and whose value is a queue holding every request waiting for the result of the in-flight request for that cache key, i.e. the queue of duplicate requests for the same URL.

    /**
     * The set of all requests currently being processed by this RequestQueue. A Request
     * will be in this set if it is waiting in any queue or currently being processed by
     * any dispatcher.
     */
    private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();

mCurrentRequests is a set containing every request currently in flight.

    /** The cache triage queue. */
    private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();

    /** The queue of requests that are actually going out to the network. */
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

Two priority queues: the cache triage queue and the network request queue.

How producers add requests

    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

The newly added request is first bound to this queue and added to the set of requests being processed:

        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

The new request is then given a sequence number, which increases in the order requests are added:

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());

If the request is marked as non-cacheable, the cache dispatcher is skipped entirely and the request goes straight onto the network queue.

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }
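
Whether a request takes this shortcut is controlled by Request.setShouldCache(); on the caller side it is a one-liner (request and queue as in the usage sketch at the top of the article):

    request.setShouldCache(false);   // skip the cache queue entirely for this request
    queue.add(request);              // goes straight onto mNetworkQueue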

The queue then checks whether there is already an in-flight request for the same cache key (i.e. the same URL); if so, the new request is simply parked in that cache key's waiting queue (the duplicate-request queue), avoiding hitting the same URL several times.

            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
            }

If the request is not a duplicate, i.e. it is the first request for this cache key, a null waiting queue is registered for the cache key and the request is added to the cache queue:

    // Insert 'null' queue for this cacheKey, indicating there is now a request in
    // flight.
    mWaitingRequests.put(cacheKey, null);
    mCacheQueue.add(request);
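
The staged duplicates are released when the in-flight request later calls back into RequestQueue.finish(): that method, roughly reproduced below from the Volley source, moves them all onto the cache queue, where they can be served from the cache entry the first request has just primed.

    void finish(Request<?> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }

        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    // The cache has just been primed by 'request', so the waiting
                    // duplicates can all be satisfied from the cache queue.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }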

Switching threads and handing results to the user: ResponseDelivery

ResponseDelivery is an interface with one direct implementation, ExecutorDelivery:

public class ExecutorDelivery implements ResponseDelivery

    /** Used for posting responses, typically to the main thread. */
    private final Executor mResponsePoster;

    /**
     * Creates a new response delivery interface.
     * @param handler {@link Handler} to post responses on
     */
    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

ExecutorDelivery's constructor takes a Handler, and that Handler is what performs the thread switch: if the Handler is bound to the UI thread's Looper, the command passed to execute will run on the UI thread.
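
Because the delivery is just a Handler wrapped in an Executor, nothing forces it to be the main thread; a sketch of delivering callbacks on a dedicated HandlerThread instead (cacheDir as above, pool size 4 chosen arbitrarily):

    // Deliver results on a background HandlerThread instead of the UI thread (sketch).
    HandlerThread deliveryThread = new HandlerThread("volley-delivery");
    deliveryThread.start();

    ResponseDelivery delivery = new ExecutorDelivery(new Handler(deliveryThread.getLooper()));
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir),
            new BasicNetwork(new HurlStack()), 4, delivery);
    queue.start();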

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

postResponse wraps the request and response in a ResponseDeliveryRunnable and passes it to execute, so that runnable ends up executing on the UI thread. What exactly does ResponseDeliveryRunnable do there?

        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
        }

The handling here branches on the outcome of the request. If the request has been canceled, its finish method is called and delivery stops right there:

        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }

If the request succeeded, mRequest.deliverResponse is called; if it failed, mRequest.deliverError is called:

       // Deliver a normal response or error, depending.
       if (mResponse.isSuccess()) {
           mRequest.deliverResponse(mResponse.result);
       } else {
           mRequest.deliverError(mResponse.error);
       }

mRequest is of type Request, which is an abstract class; at runtime it will be a StringRequest, a JsonRequest, or some other stock subclass, or the user's own subclass of Request, another clear case of programming to abstractions rather than to concrete classes.
So on the main thread mRequest.deliverResponse is called (assuming the request succeeded). Taking StringRequest as an example, what does deliverResponse actually do?

    @Override
    protected void deliverResponse(String response) {
        mListener.onResponse(response);
    }

    public StringRequest(int method, String url, Listener<String> listener,
            ErrorListener errorListener) {
        super(method, url, errorListener);
        mListener = listener;
    }

As the code shows, deliverResponse simply calls the callback's onResponse method; mListener is supplied by the user through StringRequest's constructor, so on success the user receives the resulting String directly in the onResponse method of the Listener they implemented.
Where can we see that results are handed to the UI thread through a Handler? In RequestQueue's constructors:

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }
    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

new ExecutorDelivery(new Handler(Looper.getMainLooper()))

By calling Looper.getMainLooper(), the Handler is bound to the UI thread's Looper, so the result is switched onto the UI thread before being handed to the user.
