Django Channels¶
Channels is a project that makes Django able to handle more than just plain HTTP requests, including WebSockets and HTTP2, as well as the ability to run code after a response has been sent, for things like thumbnailing or background calculation.
It's an easy-to-understand extension of the Django view model, and it's easy to integrate and deploy.
First, read our Channels Concepts documentation to get an idea of the data model underlying Channels and how it's used inside Django.
Then, read Getting Started with Channels to see how to get up and running with WebSockets with only 30 lines of code.
If you want a quick overview, start with In Short.
If you are interested in contributing, please read our Contributing docs!
Projects¶
Channels is comprised of six packages:
- Channels, the Django integration layer
- Daphne, the HTTP and Websocket termination server
- asgiref, the base ASGI library/memory backend
- asgi_redis, the Redis channel backend
- asgi_rabbitmq, the RabbitMQ channel backend
- asgi_ipc, the POSIX IPC channel backend
This documentation covers the system as a whole; individual release notes and installation instructions can be found in the individual repositories.
Topics¶
In Short¶
What is Channels?¶
Channels extends Django to add a new layer that allows two important features:
- WebSocket handling, in a way very similar to normal views
- Background tasks, running in the same servers as the rest of Django
A few other things are enabled as well, but these are the two you'll start with.
How?¶
This splits Django into two process types:
- One that handles HTTP and WebSockets
- One that runs views, websocket handlers and background tasks (consumers)
They communicate via a protocol called ASGI, which is similar to WSGI but runs over a network and allows for more protocol types.
Channels does not introduce asyncio, gevent, or any other async code to your Django code; all of your business logic runs synchronously in a worker process or thread.
Do I have to change how I use Django?¶
No, all the new stuff is entirely optional. If you want it, however, you’ll change from running Django under a WSGI server, to running:
- An ASGI server, probably Daphne
- Django worker servers, using manage.py runworker
- Something to route ASGI requests over, like Redis.
Even when you’re running on Channels, it routes all HTTP requests to the Django view system by default, so it works like before.
What else does Channels give me?¶
Other features include:
- Easy HTTP long-poll support for thousands of clients at once
- Full session and auth support for WebSockets
- Automatic user login for WebSockets based on the site's normal cookies
- Built-in primitives for mass triggering of events (chat, live blogs, etc.)
- Zero-downtime deployment with browsers paused while new workers spin up
- Optional low-level HTTP control on a per-URL basis
- Extendability to other protocols or event sources (e.g. WebRTC, raw UDP, SMS)
Does it scale?¶
Yes, you can run any number of protocol servers (ones that serve HTTP and WebSockets) and worker servers (ones that run your Django code) to fit your use case.
The ASGI spec allows a number of different channel layers to be plugged in between these two components, with different performance characteristics, and it’s designed to allow both easy sharding as well as the ability to run separate clusters with their own protocol and worker servers.
Why doesn't it use my favourite message queue?¶
Channels is deliberately designed to prefer low latency (goal is a few milliseconds) and high throughput over guaranteed delivery, which doesn’t match some message queue designs.
Some features, like guaranteed ordering of messages, are opt-in as they incur a performance hit, but make it more message queue like.
Do I need to worry about making all my code async-friendly?¶
No, all your code runs synchronously without any sockets or event loops to block. You can use async code within a Django view or channel consumer if you like - for example, to fetch lots of URLs in parallel - but it doesn’t affect the overall deployed site.
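The "fetch lots of URLs in parallel" case mentioned above can be sketched with standard-library threads inside an otherwise synchronous consumer; the `fetch` function here is a stand-in, not a real network call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a real HTTP call (e.g. urllib.request.urlopen);
    # it returns a fake body so the sketch is self-contained.
    return "body of %s" % url

def fetch_all(urls):
    # The surrounding code stays synchronous: this call blocks until
    # all fetches finish, then the consumer carries on as normal.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fetch, urls))

print(fetch_all(["http://a.example", "http://b.example"]))
```

The async work is contained entirely within the function call, so nothing about the deployed site's process model changes.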
What version of Django does it work with?¶
You can install Channels as a library for Django >= 1.8. It has a few extra dependencies, but these will all be installed if you use pip.
Official project¶
Channels is not in the Django core as initially planned, but it's an official Django project since September 2016. More information about Channels being adopted as an official project is available on the Django blog.
What do I read next?¶
Start off by reading about the concepts underlying Channels, and then move on to read our example-laden Getting Started guide.
Channels Concepts¶
Django's traditional approach revolves around requests and responses; a request comes in, Django is fired up to serve it, generates a response to send, and then Django goes away and waits for the next request.
That was fine when the web was just about simple browser interactions, but the modern web includes things like WebSockets and HTTP2 server push, which allow websites to communicate outside of this traditional cycle.
In addition, there are plenty of non-critical tasks that applications could easily offload until after a response has been sent - like saving things into a cache, or thumbnailing newly-uploaded images.
This changes the way Django runs to be "event oriented" - rather than just responding to requests, Django responds to a wide array of events sent on channels. There's still no persistent state - each event handler, or consumer as we call it, is called independently, much like a view is called.
Let's look at what channels are first.
What is a channel?¶
The core of the system is, unsurprisingly, a datastructure called a channel. What is a channel? It is an ordered, first-in first-out queue with message expiry and at-most-once delivery to only one listener at a time.
You can think of it as analogous to a task queue - messages are put onto the channel by producers, and then given to just one of the consumers listening to that channel.
By at-most-once we say that either one consumer gets the message or nobody does (if the channel implementation crashes, let's say). The alternative is at-least-once, where normally one consumer gets the message, but when things crash it's sent to more than one - that's not the trade-off we want.
There are a few other limitations - messages must be made of serializable types and stay under a certain size limit - but these are implementation details you won't need to worry about until you want to start optimising.
Channels have capacity, so a lot of producers can write lots of messages into a channel with no consumers, and a consumer can come along later and will start getting served those queued messages.
If you've used channels in Go: Go channels are reasonably similar to Django's. The key difference is that Django channels are network-transparent; the channel implementations we provide are all accessible across a network to consumers and producers running in different processes or on different machines.
Inside a network, we identify channels uniquely by a name string - you can send to any named channel from any machine connected to the same channel backend. If two different machines both write to the http.request channel, they're writing into the same channel.
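As a rough mental model (not the actual Channels API), a single-process channel layer with the properties described above - named FIFO queues, capacity limits, at-most-once delivery - could look like this minimal sketch:

```python
from collections import deque

class InMemoryChannelLayer:
    """Conceptual sketch only - real layers (asgi_redis, asgiref's
    in-memory backend) also handle expiry, serialization, and
    network transparency."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.channels = {}  # channel name -> deque of messages

    def send(self, name, message):
        queue = self.channels.setdefault(name, deque())
        # Channels have capacity, so producers can't queue unboundedly.
        if len(queue) >= self.capacity:
            raise RuntimeError("channel %s over capacity" % name)
        queue.append(message)

    def receive(self, name):
        # At-most-once: popping removes the message, so only one
        # consumer can ever see it.
        queue = self.channels.get(name)
        if queue:
            return queue.popleft()
        return None

layer = InMemoryChannelLayer()
layer.send("http.request", {"path": "/"})
print(layer.receive("http.request"))  # {'path': '/'}
```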
How do we use channels?¶
So how is Django using those channels? Inside Django you can write a function to consume a channel:
def my_consumer(message):
pass
And then assign a channel to it in the channel routing:
channel_routing = {
"some-channel": "myapp.consumers.my_consumer",
}
This means that for every message on the channel, Django will call that consumer function with a message object (message objects have a "content" attribute, which is always a dict of data, and a "channel" attribute, which is the channel it came from, as well as some others).
Rather than run in a request-response mode, Channels changes Django to run in a worker mode - it listens on all channels that have consumers assigned, and when a message arrives on one of them, it runs the relevant consumer. So instead of a single process tied to a WSGI server, Django runs in three separate layers:
- Interface servers, which communicate between Django and the outside world. This includes a WSGI adapter as well as a separate WebSocket server - we'll cover this later.
- The channel backend, which is a combination of pluggable Python code and a datastore (e.g. Redis, or a shared memory segment) responsible for transporting messages.
- The workers, which listen on all relevant channels and run consumer code when a message is ready.
This may seem relatively simplistic, but that's part of the design; rather than try to have a full asynchronous architecture, we're just introducing a slightly more complex abstraction than that presented by Django views.
A view takes a request and returns a response; a consumer takes a channel message and can write out zero to many other channel messages.
Now, let's make a channel for requests (called http.request), and a channel per client for responses (e.g. http.response.04F2h2Fd), where the response channel is a property (reply_channel) of the request message. Suddenly, a view is merely another example of a consumer:
# Listens on http.request
def my_consumer(message):
# Decode the request from message format to a Request object
django_request = AsgiRequest(message)
# Run view
django_response = view(django_request)
# Encode the response into message format
for chunk in AsgiHandler.encode_response(django_response):
message.reply_channel.send(chunk)
In fact, that's how Channels works. The interface servers transform connections from the outside world (HTTP, WebSockets, etc.) into messages on channels, and then you write workers to handle these messages. Normally you leave normal HTTP up to Django's built-in consumers that plug it into the view/template system, but you can override this to add functionality if you want.
However, the crucial part is that you can run code (and so send on channels) in response to any event - and that includes ones you create. You can trigger on model saves, on other incoming messages, or from code paths inside views and forms. This approach comes in handy for push-style code - where you use WebSockets or HTTP long-polling to notify clients of changes in real time (messages in a chat, say, or live updates in an admin page as another user edits something).
Channel Types¶
There are actually two major uses for channels in this model. The first, and more obvious one, is the dispatching of work to consumers - a message gets added to a channel, and then any one of the workers can pick it up and run the consumer.
The second kind of channel, however, is used for replies. Notably, these only have one thing listening on them - the interface server. Each reply channel is individually named, and has to be routed back to the interface server where its client is terminated.
This isn't a massive difference - they still behave according to the core definition of a channel - but it presents some problems when we're looking to scale things up. We can happily load-balance normal channels and workers randomly across a cluster - after all, any worker can process the message - but reply channels would have to have their messages sent to the particular channel server their client is listening on.
For this reason, Channels treats these as two different channel types, and denotes a reply channel by including the character ! in the name - e.g. http.response!f5G3fE21f. Normal channels don't contain it, but along with the rest of a reply channel's name, they must contain only the characters a-z A-Z 0-9 - _, and be less than 200 characters long.
It's optional for a backend implementation to understand this - after all, it's only important at scale, where you want to shard the two types differently - but it's present nonetheless. If you're writing a backend or interface server and want more flexibility and control over channel types, see Scaling Up.
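The naming rules above can be sketched as a small validator. This is illustrative only - the real checks live in the channel layer implementations - and it assumes that the dotted names used throughout these docs, like http.request, are allowed alongside the characters listed:

```python
import re

# Rough sketch of the naming rules described above. The examples in
# these docs (http.request, websocket.receive) also use ".", so it is
# included in the allowed set alongside a-z A-Z 0-9 - _.
NAME_PATTERN = re.compile(r"^[a-zA-Z0-9\-_.]+(?:![a-zA-Z0-9\-_.]*)?$")

def is_reply_channel(name):
    # Reply channels are marked by a "!" in the name.
    return "!" in name

def is_valid_channel_name(name):
    return len(name) < 200 and bool(NAME_PATTERN.match(name))

print(is_reply_channel("http.response!f5G3fE21f"))  # True
print(is_valid_channel_name("http.request"))        # True
```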
Groups¶
Because channels only deliver to a single listener, they can't do broadcast; if you want to send a message to an arbitrary group of clients, you need to keep track of the reply channels of the clients you want to send to.
Say I had a liveblog where I wanted to push out updates whenever a new post is saved: I could register a handler for the post_save signal and keep a set of channels (here, using Redis) to send updates to:
redis_conn = redis.Redis("localhost", 6379)
@receiver(post_save, sender=BlogUpdate)
def send_update(sender, instance, **kwargs):
# Loop through all reply channels and send the update
for reply_channel in redis_conn.smembers("readers"):
Channel(reply_channel).send({
"text": json.dumps({
"id": instance.id,
"content": instance.content
})
})
# Connected to websocket.connect
def ws_connect(message):
# Add to reader set
redis_conn.sadd("readers", message.reply_channel.name)
While this will work, there's a small problem - we never remove people from the readers set when they disconnect. We could add a consumer that listens to websocket.disconnect to do that, but we'd also need some kind of expiry for the cases where the interface server is forced to quit or loses power before it can send a disconnect signal - your code will never see any disconnect notification, but the reply channel is completely invalid, and messages you send there will just sit around until they expire.
Because the basic design of channels is stateless, the channel server has no concept of a channel "closing" if an interface server goes away - after all, channels are meant to hold messages until a consumer comes along (and some types of interface server, e.g. an SMS gateway, could theoretically serve any client from any interface server).
We don't particularly care if a disconnected client doesn't get the messages sent to the group - after all, it disconnected - but we do care about the channel backend getting clogged up tracking clients that no longer exist (and possibly a new reply channel colliding with an old name and receiving nonsensical messages, though that would likely be weeks later).
Now, we could go back to our example above and add an expiring set, keeping track of expiry times and so on - but what kind of framework makes you add boilerplate code like that? Instead, Channels implements this as a core abstraction called Groups:
@receiver(post_save, sender=BlogUpdate)
def send_update(sender, instance, **kwargs):
Group("liveblog").send({
"text": json.dumps({
"id": instance.id,
"content": instance.content
})
})
# Connected to websocket.connect
def ws_connect(message):
# Add to reader group
Group("liveblog").add(message.reply_channel)
# Accept the connection request
message.reply_channel.send({"accept": True})
# Connected to websocket.disconnect
def ws_disconnect(message):
# Remove from reader group on clean disconnect
Group("liveblog").discard(message.reply_channel)
Not only do groups have their own send() method (which backends can provide an efficient implementation of), they also automatically manage expiry of the group members - when a channel starts having messages expire on it because nobody is listening, we go through all the groups it's in and remove it from them. Of course, you should still remove channels from groups on disconnect if you can; the expiry code is there to catch the cases where the disconnect message doesn't get through for some reason.
Groups are generally only useful for reply channels (ones containing the character !), as these are unique per client, but you can use them with normal channels as well if you like.
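To see why expiry-based pruning solves the stale-member problem, here is a toy, single-process sketch of a group that drops members not refreshed within an expiry window (the real Group ties pruning to message expiry in the channel layer, not a simple timestamp):

```python
import time

class SketchGroup:
    """Toy sketch of group membership with expiry - NOT the Channels
    Group API, just an illustration of the pruning idea."""

    def __init__(self, name, expiry=60):
        self.name = name
        self.expiry = expiry
        self.members = {}  # channel name -> time added/refreshed

    def add(self, channel):
        self.members[channel] = time.time()

    def discard(self, channel):
        self.members.pop(channel, None)

    def channel_names(self):
        # Prune members older than the expiry window before sending,
        # so channels whose clients vanished eventually fall out.
        cutoff = time.time() - self.expiry
        self.members = {c: t for c, t in self.members.items() if t >= cutoff}
        return list(self.members)

group = SketchGroup("liveblog", expiry=60)
group.add("websocket.send!abc123")
print(group.channel_names())  # ['websocket.send!abc123']
```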
Next Steps¶
That's a high-level overview of channels and groups, and how you should start thinking about them. Remember, Django provides some channels, but you're free to make and consume your own, and all channels are network-transparent.
One thing channels do not do, however, is guarantee delivery. If you need certainty that a task will be completed, use a system designed for this with retries and persistence (e.g. Celery), or make a management command that checks for completion and re-submits a message to the channel if nothing has completed (rolling your own retry logic, essentially).
We'll cover more about what kinds of tasks fit well into Channels in the rest of the documentation, but for now, let's progress to Getting Started with Channels and write some code.
Installation¶
Channels is available on PyPI - to install it, just run:
pip install -U channels
Once that's done, you should add channels to your INSTALLED_APPS setting:
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
...
'channels',
)
That's it! Once enabled, channels will integrate itself into Django and take control of the runserver command. See Getting Started with Channels for more.
Installing the latest development version¶
To install the latest development version of Channels, clone the repo, change to the repo directory, and pip install it into your current virtual environment:
$ git clone git@github.com:django/channels.git
$ cd channels
$ <activate your project’s virtual environment>
(environment) $ pip install -e . # the dot specifies the current repo
Getting Started with Channels¶
(If you haven’t yet, make sure you install Channels)
Now, let's get to writing some consumers. If you've not read it already, make sure you read Channels Concepts first; it covers the basic description of what channels and groups are, along with the important implementation patterns and caveats.
First Consumers¶
When you first run Django with Channels installed, it will be set up in the default layout - where all HTTP requests (on the http.request channel) are routed to Django's built-in view system, just like a WSGI-based Django would serve your views and static files (it still works like a normal runserver, with no visible difference).
As a very basic introduction, let's write a consumer that overrides the built-in handling and handles every HTTP request directly. This isn't something you'd usually do in a project, but it's a good illustration of how Channels now underlies Django - it's less a new addition and more a whole new layer under the existing view system.
Make a new project, make a new app, and put this in a consumers.py file in the app:
from django.http import HttpResponse
from channels.handler import AsgiHandler
def http_consumer(message):
# Make standard HTTP response - access ASGI path attribute directly
response = HttpResponse("Hello world! You asked for %s" % message.content['path'])
# Encode that response into message format (ASGI)
for chunk in AsgiHandler.encode_response(response):
message.reply_channel.send(chunk)
The most important thing to note here is that, because things we send in messages must be JSON-serializable, the request and response messages are in a key-value format. You can read more about the format in the ASGI specification, but you don't need to worry about it too much; just know that there's an AsgiRequest class that translates ASGI messages into Django request objects, and an AsgiHandler class that translates an HttpResponse into ASGI messages, which you see used above. Usually, Django's built-in handler code will do all this for you when you're using normal views.
Now we need to do one more thing, and that's tell Django that this consumer should be tied to the http.request channel rather than the default Django view system. This is done in the settings file - in particular, we need to define our default channel layer and its routing.
Channel routing is a bit like URL routing, and so it's structured similarly - you point the setting at a dict mapping channels to consumer callables. Here's what that looks like:
# In settings.py
CHANNEL_LAYERS = {
"default": {
"BACKEND": "asgiref.inmemory.ChannelLayer",
"ROUTING": "myproject.routing.channel_routing",
},
}
# In routing.py
from channels.routing import route
channel_routing = [
route("http.request", "myapp.consumers.http_consumer"),
]
Warning
This example, and most of the examples in this documentation, use the "in memory" channel layer. This is the easiest to get started with, but provides absolutely no cross-process channel transportation, and so can only be used with runserver. You'll want to choose another backend (discussed later) to run things in production.
As you can see, this is a little like Django's DATABASES setting; there are named channel layers, with a default one called default. Each layer needs a channel layer class, some options (if the channel layer needs them), and a routing scheme, which points to a list containing the routing settings. It's recommended you call this file routing.py and put it alongside urls.py in your project, but you can put it wherever you like, as long as the path is correct.
If you start up python manage.py runserver and go to http://localhost:8000, you'll see the Hello World response instead of the default Django page, which means it's working. If you don't get a response, check that you installed Channels correctly.
Now, that's not very exciting - raw HTTP responses are something Django has been able to do for a long time. Let's try some WebSockets, and make a basic chat server!
We'll start with a simple server that just echoes every message it gets sent back to the same client - no cross-client communication. It's not terribly useful, but it's a good way to start writing Channels consumers.
Delete that previous consumer and its routing - from now on we want the normal Django view layer to serve HTTP requests, which happens if you don't specify a consumer for http.request - and make this WebSocket consumer instead:
# In consumers.py
def ws_message(message):
# ASGI WebSocket packet-received and send-packet message types
# both have a "text" key for their textual data.
message.reply_channel.send({
"text": message.content['text'],
})
Hook it up to the websocket.receive channel like this:
# In routing.py
from channels.routing import route
from myapp.consumers import ws_message
channel_routing = [
route("websocket.receive", ws_message),
]
Now, let's look at what this is doing. It's tied to the websocket.receive channel, which means that it'll get a message whenever a WebSocket packet is sent to us by a client.
When it gets that message, it takes the reply_channel attribute from it, which is the unique response channel for that client, and sends the same content back to the client using its send() method.
Let’s test it! Run runserver
, open a browser, navigate to a page on the server
(you can’t use any page’s console because of origin restrictions), and put the
following into the JavaScript console to open a WebSocket and send some data
down it (you might need to change the socket address if you’re using a
development VM or similar)
// Note that the path doesn't matter for routing; any WebSocket
// connection gets bumped over to WebSocket consumers
socket = new WebSocket("ws://" + window.location.host + "/chat/");
socket.onmessage = function(e) {
alert(e.data);
}
socket.onopen = function() {
socket.send("hello world");
}
// Call onopen directly if socket is already open
if (socket.readyState == WebSocket.OPEN) socket.onopen();
You should see an alert come back immediately saying "hello world" - your message has round-tripped through the server and come back to trigger the alert.
Groups¶
Now let's make our echo server into an actual chat server, so people can talk to each other. To do this, we'll use Groups, one of the core concepts of Channels, and our fundamental way of doing multi-cast messaging.
To do this, we'll hook up the websocket.connect and websocket.disconnect channels to add and remove our clients from the Group as they connect and disconnect, like this:
# In consumers.py
from channels import Group
# Connected to websocket.connect
def ws_add(message):
# Accept the incoming connection
message.reply_channel.send({"accept": True})
# Add them to the chat group
Group("chat").add(message.reply_channel)
# Connected to websocket.disconnect
def ws_disconnect(message):
Group("chat").discard(message.reply_channel)
Note
You need to explicitly accept WebSocket connections if you override connect
by sending accept: True
- you can also reject them at connection time,
before they open, by sending close: True
.
Of course, if you've read through Channels Concepts, you'll know that channels added to groups expire out when their messages expire (every channel layer has a message expiry time, usually between 30 seconds and a few minutes, and it's often configurable) - but the disconnect handler will be called nearly all of the time anyway.
Note
Channels is designed to keep working even when messages fail to be delivered. It assumes that a small number of messages may never be delivered, and so all the core functionality is designed to expect failure, so that when a message doesn't get through, it doesn't ruin the whole system.
We suggest you design your applications the same way - rather than relying on 100% guaranteed delivery, which Channels won’t give you, look at each failure case and program something to expect and handle it - be that retry logic, partial content handling, or just having something not work that one time. HTTP requests are just as fallible, and most people’s response to that is a generic error page!
Now, that’s taken care of adding and removing WebSocket send channels for the
chat
group; all we need to do now is take care of message sending. Instead
of echoing the message back to the client like we did above, we’ll instead send
it to the whole Group
, which means any client who’s been added to it will
get the message. Here’s all the code:
# In consumers.py
from channels import Group
# Connected to websocket.connect
def ws_add(message):
# Accept the connection
message.reply_channel.send({"accept": True})
# Add to the chat group
Group("chat").add(message.reply_channel)
# Connected to websocket.receive
def ws_message(message):
Group("chat").send({
"text": "[user] %s" % message.content['text'],
})
# Connected to websocket.disconnect
def ws_disconnect(message):
Group("chat").discard(message.reply_channel)
And here's what our routing should look like in routing.py:
from channels.routing import route
from myapp.consumers import ws_add, ws_message, ws_disconnect
channel_routing = [
route("websocket.connect", ws_add),
route("websocket.receive", ws_message),
route("websocket.disconnect", ws_disconnect),
]
Note that the http.request
route is no longer present - if we leave it
out, then Django will route HTTP requests to the normal view system by default,
which is probably what you want. Even if you have a http.request
route that
matches just a subset of paths or methods, the ones that don’t match will still
fall through to the default handler, which passes it into URL routing and the
views.
With all that code, you now have a working set of logic for a chat server.
Test time! Run runserver
, open a browser and use that same JavaScript
code in the developer console as before
// Note that the path doesn't matter right now; any WebSocket
// connection gets bumped over to WebSocket consumers
socket = new WebSocket("ws://" + window.location.host + "/chat/");
socket.onmessage = function(e) {
alert(e.data);
}
socket.onopen = function() {
socket.send("hello world");
}
// Call onopen directly if socket is already open
if (socket.readyState == WebSocket.OPEN) socket.onopen();
You should see an alert come back immediately saying “hello world” - but this
time, you can open another tab and do the same there, and both tabs will
receive the message and show an alert. Any incoming message is sent to the
chat
group by the ws_message
consumer, and both your tabs will have
been put into the chat
group when they connected.
Feel free to put some calls to print
in your handler functions too, if you
like, so you can understand when they’re called. You can also use pdb
and
other similar methods you’d use to debug normal Django projects.
Running with Channels¶
Because Channels takes Django into a multi-process model, you no longer run everything in one process along with a WSGI server (of course, you’re still free to do that if you don’t want to use Channels). Instead, you run one or more interface servers, and one or more worker servers, connected by that channel layer you configured earlier.
There are multiple kinds of “interface servers”, and each one will service a different type of request - one might do both WebSocket and HTTP requests, while another might act as an SMS message gateway, for example.
These are separate from the “worker servers” where Django will run actual logic, though, and so the channel layer transports the content of channels across the network. In a production scenario, you’d usually run worker servers as a separate cluster from the interface servers, though of course you can run both as separate processes on one machine too.
By default, Django doesn’t have a channel layer configured - it doesn’t need one to run normal WSGI requests, after all. As soon as you try to add some consumers, though, you’ll need to configure one.
In the example above we used the in-memory channel layer implementation
as our default channel layer. This just stores all the channel data in a dict
in memory, and so isn’t actually cross-process; it only works inside
runserver
, as that runs the interface and worker servers in different threads
inside the same process. When you deploy to production, you’ll need to
use a channel layer like the Redis backend asgi_redis
that works cross-process;
see 通道層類型 for more.
The second thing, once we have a networked channel backend set up, is to make
sure we’re running an interface server that’s capable of serving WebSockets.
To solve this, Channels comes with daphne
, an interface server
that can handle both HTTP and WebSockets at the same time, and then ties this
in to run when you run runserver
- you shouldn’t notice any difference
from the normal Django runserver
, though some of the options may be a little
different.
(Under the hood, runserver is now running Daphne in one thread and a worker with autoreload in another - it’s basically a miniature version of a deployment, but all in one process)
Let's try out the Redis backend - Redis runs on pretty much every machine, and has a very small overhead, which makes it perfect for this kind of thing. Install the asgi_redis package using pip:
pip install asgi_redis
and set up your channel layer like this:
# In settings.py
CHANNEL_LAYERS = {
"default": {
"BACKEND": "asgi_redis.RedisChannelLayer",
"CONFIG": {
"hosts": [("localhost", 6379)],
},
"ROUTING": "myproject.routing.channel_routing",
},
}
You’ll also need to install the Redis server - there are downloads available for Mac OS and Windows, and it’s in pretty much every linux distribution’s package manager. For example, on Ubuntu, you can just:
sudo apt-get install redis-server
Fire up runserver
, and it’ll work as before - unexciting, like good
infrastructure should be. You can also try out the cross-process nature; run
these two commands in two terminals:
manage.py runserver --noworker
manage.py runworker
As you can probably guess, this disables the worker threads in runserver
and handles them in a separate process. You can pass -v 2
to runworker
if you want to see logging as it runs the consumers.
If Django is in debug mode (DEBUG=True
), then runworker
will serve
static files, as runserver
does. Just like a normal Django setup, you’ll
have to set up your static file serving for when DEBUG
is turned off.
Persisting Data¶
Echoing messages is a nice simple example, but it’s ignoring the real
need for a system like this - persistent state for connections.
Let’s consider a basic chat site where a user requests a chat room upon initial
connection, as part of the query string (e.g. wss://host/websocket?room=abc
).
The reply_channel
attribute you’ve seen before is our unique pointer to the
open WebSocket - because it varies between different clients, it’s how we can
keep track of “who” a message is from. Remember, Channels is network-transparent
and can run on multiple workers, so you can’t just store things locally in
global variables or similar.
Instead, the solution is to persist information keyed by the reply_channel
in
some other data store - sound familiar? This is what Django’s session framework
does for HTTP requests, using a cookie as the key. Wouldn’t it be useful if
we could get a session using the reply_channel
as a key?
Channels provides a channel_session
decorator for this purpose - it
provides you with an attribute called message.channel_session
that acts
just like a normal Django session.
Let’s use it now to build a chat server that expects you to pass a chatroom name in the path of your WebSocket request (we’ll ignore auth for now - that’s next):
# In consumers.py
from channels import Group
from channels.sessions import channel_session
# Connected to websocket.connect
@channel_session
def ws_connect(message):
# Accept connection
message.reply_channel.send({"accept": True})
# Work out room name from path (ignore slashes)
room = message.content['path'].strip("/")
# Save room in session and add us to the group
message.channel_session['room'] = room
Group("chat-%s" % room).add(message.reply_channel)
# Connected to websocket.receive
@channel_session
def ws_message(message):
Group("chat-%s" % message.channel_session['room']).send({
"text": message['text'],
})
# Connected to websocket.disconnect
@channel_session
def ws_disconnect(message):
Group("chat-%s" % message.channel_session['room']).discard(message.reply_channel)
Update routing.py
as well:
# in routing.py
from channels.routing import route
from myapp.consumers import ws_connect, ws_message, ws_disconnect
channel_routing = [
route("websocket.connect", ws_connect),
route("websocket.receive", ws_message),
route("websocket.disconnect", ws_disconnect),
]
If you play around with it from the console (or start building a simple JavaScript chat client that appends received messages to a div), you’ll see that you can set a chat room with the initial request.
Authentication¶
Now, of course, a WebSocket solution is somewhat limited in scope without the ability to live with the rest of your website - in particular, we want to make sure we know what user we’re talking to, in case we have things like private chat channels (we don’t want a solution where clients just ask for the right channels, as anyone could change the code and just put in private channel names)
It can also save you having to manually make clients ask for what they want to see; if I see you open a WebSocket to my “updates” endpoint, and I know which user you are, I can just auto-add that channel to all the relevant groups (mentions of that user, for example).
Handily, as WebSockets start off using the HTTP protocol, they have a lot of familiar features, including a path, GET parameters, and cookies. We’d like to use these to hook into the familiar Django session and authentication systems; after all, WebSockets are no good unless we can identify who they belong to and do things securely.
In addition, we don’t want the interface servers storing data or trying to run authentication; they’re meant to be simple, lean, fast processes without much state, and so we’ll need to do our authentication inside our consumer functions.
Fortunately, because Channels has an underlying spec for WebSockets and other messages (ASGI), it ships with decorators that help you with both authentication and getting the underlying Django session (which is what Django authentication relies on).
Channels can use Django sessions either from cookies (if you’re running your
websocket server on the same domain as your main site, using something like Daphne),
or from a session_key
GET parameter, which works if you want to keep
running your HTTP requests through a WSGI server and offload WebSockets to a
second server process on another domain.
You get access to a user’s normal Django session using the http_session
decorator - that gives you a message.http_session
attribute that behaves
just like request.session
. You can go one further and use http_session_user
which will provide a message.user
attribute as well as the session attribute.
Now, one thing to note is that you only get the detailed HTTP information
during the connect
message of a WebSocket connection (you can read more
about that in the ASGI spec) - this means we’re not
wasting bandwidth sending the same information over the wire needlessly.
This also means we’ll have to grab the user in the connection handler and then
store it in the session; thankfully, Channels ships with both a channel_session_user
decorator that works like the http_session_user
decorator we mentioned above but
loads the user from the channel session rather than the HTTP session,
and a function called transfer_user
which replicates a user from one session
to another. Even better, it combines all of these into a channel_session_user_from_http
decorator.
Bringing that all together, let’s make a chat server where users can only chat to people with the same first letter of their username:
# In consumers.py
from channels import Channel, Group
from channels.sessions import channel_session
from channels.auth import channel_session_user, channel_session_user_from_http
# Connected to websocket.connect
@channel_session_user_from_http
def ws_add(message):
# Accept connection
message.reply_channel.send({"accept": True})
# Add them to the right group
Group("chat-%s" % message.user.username[0]).add(message.reply_channel)
# Connected to websocket.receive
@channel_session_user
def ws_message(message):
Group("chat-%s" % message.user.username[0]).send({
"text": message['text'],
})
# Connected to websocket.disconnect
@channel_session_user
def ws_disconnect(message):
Group("chat-%s" % message.user.username[0]).discard(message.reply_channel)
If you’re just using runserver
(and so Daphne), you can just connect
and your cookies should transfer your auth over. If you were running WebSockets
on a separate domain, you’d have to remember to provide the
Django session ID as part of the URL, like this
socket = new WebSocket("ws://127.0.0.1:9000/?session_key=abcdefg");
You can get the current session key in a template with {{ request.session.session_key }}
.
Note that this can’t work with signed cookie sessions - since only HTTP
responses can set cookies, it needs a backend it can write to to separately
store state.
Security¶
Unlike AJAX requests, WebSocket requests are not limited by the Same-Origin policy. This means you don’t have to take any extra steps when you have an HTML page served by host A containing JavaScript code wanting to connect to a WebSocket on Host B.
While this can be convenient, it also implies that by default any third-party
site can connect to your WebSocket application. When you are using the
http_session_user
or the channel_session_user_from_http
decorator, this
connection would be authenticated.
The WebSocket specification requires browsers to send the origin of a WebSocket
request in the HTTP header named Origin
, but validating that header is left
to the server.
You can use the decorator channels.security.websockets.allowed_hosts_only
on a websocket.connect
consumer to only allow requests originating
from hosts listed in the ALLOWED_HOSTS
setting:
# In consumers.py
from channels import Channel, Group
from channels.sessions import channel_session
from channels.auth import channel_session_user, channel_session_user_from_http
from channels.security.websockets import allowed_hosts_only
# Connected to websocket.connect
@allowed_hosts_only
@channel_session_user_from_http
def ws_add(message):
# Accept connection
...
Requests from other hosts or requests with missing or invalid origin header are now rejected.
The name allowed_hosts_only
is an alias for the class-based decorator
AllowedHostsOnlyOriginValidator
, which inherits from
BaseOriginValidator
. If you have custom requirements for origin validation,
create a subclass and overwrite the method
validate_origin(self, message, origin)
. It must return True when a message
should be accepted, False otherwise.
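As an illustration of the kind of check a validate_origin override might perform, here is a standalone sketch (not the actual BaseOriginValidator code; the ALLOWED_HOSTS list is a stand-in for the Django setting):

```python
from urllib.parse import urlparse

# Stand-in for Django's ALLOWED_HOSTS setting.
ALLOWED_HOSTS = ["example.com", "www.example.com"]

def validate_origin(origin, allowed_hosts=ALLOWED_HOSTS):
    """Accept only when the Origin header parses to a host in the
    allowed list; reject missing or unparseable origins."""
    if not origin:
        return False
    host = urlparse(origin).hostname
    return host in allowed_hosts

print(validate_origin("https://example.com"))       # True
print(validate_origin("https://evil.example.net"))  # False
```

A real subclass would wrap logic like this in validate_origin(self, message, origin) and return True or False accordingly.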
Routing¶
The routing.py
file acts very much like Django’s urls.py
, including the
ability to route things to different consumers based on path
, or any other
message attribute that’s a string (for example, http.request
messages have
a method
key you could route based on).
Much like urls, you route using regular expressions; the main difference is that
because the path
is not special-cased - Channels doesn’t know that it’s a URL -
you have to start patterns with the root /
, and end includes without a /
so that when the patterns combine, they work correctly.
Finally, because you’re matching against message contents using keyword arguments, you can only use named groups in your regular expressions! Here’s an example of routing our chat from above:
from channels.routing import route, include

http_routing = [
route("http.request", poll_consumer, path=r"^/poll/$", method=r"^POST$"),
]
chat_routing = [
route("websocket.connect", chat_connect, path=r"^/(?P<room>[a-zA-Z0-9_]+)/$"),
route("websocket.disconnect", chat_disconnect),
]
routing = [
# You can use a string import path as the first argument as well.
include(chat_routing, path=r"^/chat"),
include(http_routing),
]
The routing is resolved in order, short-circuiting around the
includes if one or more of their matches fails. You don’t have to start with
the ^
symbol - we use Python’s re.match
function, which starts at the
start of a line anyway - but it’s considered good practice.
When an include matches part of a message value, it chops off the bit of the value it matched before passing it down to its routes or sub-includes, so you can put the same routing under multiple includes with different prefixes if you like.
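You can see this combination behaviour with plain re, which is all the routing uses under the hood (the patterns below mirror the chat example above):

```python
import re

# Because matches are passed to consumers as keyword arguments,
# only named groups like (?P<room>...) may be used.
include_prefix = re.compile(r"^/chat")
room_pattern = re.compile(r"^/(?P<room>[a-zA-Z0-9_]+)/$")

path = "/chat/django/"
prefix_match = include_prefix.match(path)
# The include chops off the part it matched before passing the
# remainder down to its routes:
remainder = path[prefix_match.end():]
inner_match = room_pattern.match(remainder)
print(inner_match.groupdict())  # {'room': 'django'}
```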
Because these matches come through as keyword arguments, we could modify our consumer above to use a room based on URL rather than username:
# Connected to websocket.connect
@channel_session_user_from_http
def ws_add(message, room):
    # Add them to the right group
    Group("chat-%s" % room).add(message.reply_channel)
    # Accept the connection request
    message.reply_channel.send({"accept": True})
In the next section, we'll change to sending the room as a part of the WebSocket message - which you might do if you had a multiplexing client - but you could use routing there as well.
Models¶
So far, we’ve just been taking incoming messages and rebroadcasting them to other clients connected to the same group, but this isn’t that great; really, we want to persist messages to a datastore, and we’d probably like to be able to inject messages into chatrooms from things other than WebSocket client connections (perhaps a built-in bot, or server status messages).
Thankfully, we can just use Django’s ORM to handle persistence of messages and easily integrate the send into the save flow of the model, rather than the message receive - that way, any new message saved will be broadcast to all the appropriate clients, no matter where it’s saved from.
We'll even take some performance considerations into account: we'll make our own custom channel for new chat messages and move the model save and the chat broadcast into that, meaning the sending process/consumer can move on immediately and not spend time waiting for the database save and the (slow on some backends) Group.send() call.
Let's see what that looks like, assuming we have a ChatMessage model with message and room fields:
# In consumers.py
from channels import Channel, Group
from channels.sessions import channel_session
from .models import ChatMessage

# Connected to chat-messages
def msg_consumer(message):
    # Save to model
    room = message.content['room']
    ChatMessage.objects.create(
        room=room,
        message=message.content['message'],
    )
    # Broadcast to listening sockets
    Group("chat-%s" % room).send({
        "text": message.content['message'],
    })

# Connected to websocket.connect
@channel_session
def ws_connect(message):
    # Work out room name from path (ignore slashes)
    room = message.content['path'].strip("/")
    # Save room in session and add us to the group
    message.channel_session['room'] = room
    Group("chat-%s" % room).add(message.reply_channel)
    # Accept the connection request
    message.reply_channel.send({"accept": True})

# Connected to websocket.receive
@channel_session
def ws_message(message):
    # Stick the message onto the processing queue
    Channel("chat-messages").send({
        "room": message.channel_session['room'],
        "message": message['text'],
    })

# Connected to websocket.disconnect
@channel_session
def ws_disconnect(message):
    Group("chat-%s" % message.channel_session['room']).discard(message.reply_channel)
Update routing.py as well:
# in routing.py
from channels.routing import route
from myapp.consumers import ws_connect, ws_message, ws_disconnect, msg_consumer

channel_routing = [
    route("websocket.connect", ws_connect),
    route("websocket.receive", ws_message),
    route("websocket.disconnect", ws_disconnect),
    route("chat-messages", msg_consumer),
]
Note that we could add messages onto the chat-messages channel from anywhere: inside a View, inside another model's post_save signal, inside a management command run via cron. If we wanted to write a bot, too, we could put its listening logic inside the chat-messages consumer, as every message would pass through it.
Enforcing Ordering¶
There’s one final concept we want to introduce you to before you go on to build sites with Channels - consumer ordering.
Because Channels is a distributed system that can have many workers, by default it just processes messages in the order the workers get them off the queue. It's entirely feasible for a WebSocket interface server to send out two receive messages close enough together that a second worker will pick up and start processing the second message before the first worker has finished processing the first.
This is particularly annoying if you're storing things in the session in one consumer and trying to get them in another - because the connect consumer hasn't exited, its session hasn't saved. You'd get the same effect if someone tried to request a view before the login view had finished processing, of course, but HTTP requests usually come in a bit slower from clients.
Channels has a solution - the enforce_ordering decorator. All WebSocket messages contain an order key, and this decorator uses that to make sure that messages are consumed in the right order. In addition, the connect message blocks the socket opening until it's responded to, so you are always guaranteed that connect will run before any receives, even without the decorator.
The decorator uses channel_session to keep track of what numbered messages have been processed, and if a worker tries to run a consumer on an out-of-order message, it raises the ConsumeLater exception, which puts the message back on the channel it came from and tells the worker to work on another message.
There's a high cost to using enforce_ordering, which is why it's an optional decorator. Here's an example of it being used:
# In consumers.py
from channels import Channel, Group
from channels.sessions import channel_session, enforce_ordering
from channels.auth import channel_session_user, channel_session_user_from_http

# Connected to websocket.connect
@channel_session_user_from_http
def ws_add(message):
    # This doesn't need a decorator - it always runs separately
    message.channel_session['sent'] = 0
    # Add them to the right group
    Group("chat").add(message.reply_channel)
    # Accept the socket
    message.reply_channel.send({"accept": True})

# Connected to websocket.receive
@enforce_ordering
@channel_session_user
def ws_message(message):
    # Without enforce_ordering this wouldn't work right
    message.channel_session['sent'] = message.channel_session['sent'] + 1
    Group("chat").send({
        "text": "%s: %s" % (message.channel_session['sent'], message['text']),
    })

# Connected to websocket.disconnect
@channel_session_user
def ws_disconnect(message):
    Group("chat").discard(message.reply_channel)
Generally, the performance (and safety) of your ordering is tied to your session backend's performance. Make sure you choose a session backend wisely if you're going to rely heavily on enforce_ordering.
Next Steps¶
That covers the basics of using Channels; you’ve seen not only how to use basic channels, but also seen how they integrate with WebSockets, how to use groups to manage logical sets of channels, and how Django’s session and authentication systems easily integrate with WebSockets.
We recommend you read through the rest of the reference documentation to see more about what you can do with channels; in particular, you may want to look at our Deployment documentation to get an idea of how to design and run apps in production environments.
Deployment¶
Deploying applications that use Channels takes a few more steps than a normal Django WSGI application, but you have a few options as to how to deploy them and how to route traffic through the channel layer.
First of all, remember that Channels is an entirely optional part of Django. If you leave a project with the default settings (no CHANNEL_LAYERS), it will just run and work like a normal WSGI app.
When you want to enable channels in production, you need to do three things:
- Set up a channel backend
- Run worker servers
- Run interface servers
You can set things up in one of two ways; either route all traffic through a HTTP/WebSocket interface server, removing the need to run a WSGI server at all; or, just route WebSockets and long-poll HTTP connections to the interface server, and leave other pages served by a standard WSGI server.
Routing all traffic through the interface server lets you have WebSockets and long-polling coexist in the same URL tree with no configuration; if you split the traffic up, you’ll need to configure a webserver or layer 7 loadbalancer in front of the two servers to route requests to the correct place based on path or domain. Both methods are covered below.
Setting up a channel backend¶
The first step is to set up a channel backend. If you followed the Getting Started with Channels guide, you will have ended up using the in-memory backend, which is useful for runserver, but as it only works inside the same process, useless for actually running separate worker and interface servers.
Instead, take a look at the list of channel layer types, and choose one that fits your requirements (additionally, you could use a third-party pluggable backend or write your own - that page also explains the interface and rules a backend has to follow).
Typically a channel backend will connect to one or more central servers that serve as the communication layer - for example, the Redis backend connects to a Redis server. All this goes into the CHANNEL_LAYERS setting; here's an example for a remote Redis server:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis-server-name", 6379)],
        },
        "ROUTING": "my_project.routing.channel_routing",
    },
}
To use the Redis backend, you have to install it:
pip install -U asgi_redis
Some backends, though, don't require an extra server, like the IPC backend, which works between processes on the same machine but not over the network (it's available in the asgi_ipc package):
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_ipc.IPCChannelLayer",
        "ROUTING": "my_project.routing.channel_routing",
        "CONFIG": {
            "prefix": "mysite",
        },
    },
}
Make sure the same settings file is used across all your workers and interface servers; without it, they won’t be able to talk to each other and things will just fail to work.
If you prefer to use the RabbitMQ layer, please refer to its documentation. Usually your config will end up looking like this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_rabbitmq.RabbitmqChannelLayer",
        "ROUTING": "my_project.routing.channel_routing",
        "CONFIG": {
            "url": "amqp://guest:guest@rabbitmq:5672/%2F",
        },
    },
}
Run worker servers¶
Because the work of running consumers is decoupled from the work of talking to HTTP, WebSocket and other client connections, you need to run a cluster of “worker servers” to do all the processing.
Each server is single-threaded, so it’s recommended you run around one or two per core on each machine; it’s safe to run as many concurrent workers on the same machine as you like, as they don’t open any ports (all they do is talk to the channel backend).
To run a worker server, just run:
python manage.py runworker
Make sure you run this inside an init system or a program like supervisord that can take care of restarting the process when it exits; the worker server has no retry-on-exit logic, though it will absorb tracebacks from inside consumers and forward them to stderr.
Make sure you keep an eye on how busy your workers are; if they get overloaded, requests will take longer and longer to return as the messages queue up (until the expiry or capacity limit is reached, at which point HTTP connections will start dropping).
In a more complex project, you won’t want all your channels being served by the same workers, especially if you have long-running tasks (if you serve them from the same workers as HTTP requests, there’s a chance long-running tasks could block up all the workers and delay responding to HTTP requests).
To manage this, it's possible to tell workers to either limit themselves to just certain channel names or ignore specific channels using the --only-channels and --exclude-channels options. Here's an example of configuring a worker to only serve HTTP and WebSocket requests:
python manage.py runworker --only-channels=http.* --only-channels=websocket.*
Or telling the workers to ignore all messages on the "thumbnail" channel:
python manage.py runworker --exclude-channels=thumbnail
Run interface servers¶
The final piece of the puzzle is the “interface servers”, the processes that do the work of taking incoming requests and loading them into the channels system.
If you want to support WebSockets, long-poll HTTP requests and other Channels features, you’ll need to run a native ASGI interface server, as the WSGI specification has no support for running these kinds of requests concurrently. We ship with an interface server that we recommend you use called Daphne; it supports WebSockets, long-poll HTTP requests, HTTP/2 and performs quite well.
You can just keep running your Django code as a WSGI app if you like, behind something like uwsgi or gunicorn; this won’t let you support WebSockets, though, so you’ll need to run a separate interface server to terminate those connections and configure routing in front of your interface and WSGI servers to route requests appropriately.
If you use Daphne for all traffic, it auto-negotiates between HTTP and WebSocket, so there’s no need to have your WebSockets on a separate domain or path (and they’ll be able to share cookies with your normal view code, which isn’t possible if you separate by domain rather than path).
To run Daphne, it just needs to be supplied with a channel backend, in much the same way a WSGI server needs to be given an application. First, make sure your project has an asgi.py file that looks like this (it should live next to wsgi.py):
import os
from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")

channel_layer = get_channel_layer()
Then, you can run Daphne and supply the channel layer as the argument:
daphne my_project.asgi:channel_layer
Like runworker, you should place this inside an init system or something like supervisord to ensure it is re-run if it exits unexpectedly.
If you only run Daphne and no workers, all of your page requests will seem to hang forever; that's because Daphne doesn't have any worker servers to handle the request and is waiting for one to appear (while runserver also uses Daphne, it launches worker threads along with it in the same process). In this scenario, it will eventually time out and give you a 503 error after 2 minutes; you can configure how long it waits with the --http-timeout command line argument.
Deploying new versions of code¶
One of the benefits of decoupling the client connection handling from work processing is that it means you can run new code without dropping client connections; this is especially useful for WebSockets.
Just restart your workers when you have new code (by default, if you send them SIGTERM they’ll cleanly exit and finish running any in-process consumers), and any queued messages or new connections will go to the new workers. As long as the new code is session-compatible, you can even do staged rollouts to make sure workers on new code aren’t experiencing high error rates.
There's no need to restart the WSGI or WebSocket interface servers unless you've upgraded the interface server itself or changed the CHANNEL_LAYERS setting; none of your code is used by them, and all middleware and code that can customize requests is run on the consumers.
You can even use different Python versions for the interface servers and the workers; the ASGI protocol that channel layers communicate over is designed to be portable across all Python versions.
Running just ASGI¶
If you are just running Daphne to serve all traffic, then the configuration above is enough where you can just expose it to the Internet and it’ll serve whatever kind of request comes in; for a small site, just the one Daphne instance and four or five workers is likely enough.
However, larger sites will need to deploy things at a slightly larger scale, and how you scale things up is different from WSGI; see Scaling Up.
Running ASGI alongside WSGI¶
ASGI and its canonical interface server Daphne are both relatively new, and so you may not wish to run all your traffic through it yet (or you may be using specialized features of your existing WSGI server).
If that’s the case, that’s fine; you can run Daphne and a WSGI server alongside each other, and only have Daphne serve the requests you need it to (usually WebSocket and long-poll HTTP requests, as these do not fit into the WSGI model).
To do this, just set up your Daphne to serve as we discussed above, and then configure your load-balancer or front HTTP server process to dispatch requests to the correct server - based on either path, domain, or if you can, the Upgrade header.
Dispatching based on path or domain means you'll need to design your WebSocket URLs carefully so you can always tell how to route them at the load-balancer level; the ideal thing is to be able to look for the Upgrade: WebSocket header and distinguish connections by this, but not all software supports this and it doesn't help route long-poll HTTP connections at all.
You could also invert this model, and have all connections go to Daphne by default and selectively route some back to the WSGI server, if you have particular URLs or domains you want to use that server on.
Running on a PaaS¶
To run Django with channels enabled on a Platform-as-a-Service (PaaS), you will need to ensure that your PaaS allows you to run multiple processes at different scaling levels; one group will be running Daphne, as a pure Python application (not a WSGI application), and the other should be running runworker.
The PaaS will also have to either provide its own Redis service or a third process type that lets you run Redis yourself to use the cross-network channel backend; both interface and worker processes need to be able to see Redis, but not each other.
If you are only allowed one running process type, it’s possible you could combine both interface server and worker into one process using threading and the in-memory backend; however, this is not recommended for production use as you cannot scale up past a single node without groups failing to work.
Scaling Up¶
Scaling up a deployment containing channels (and thus running ASGI) is a little different to scaling a WSGI deployment.
The fundamental difference is that the group mechanic requires all servers serving the same site to be able to see each other; if you separate the site up and run it in a few, large clusters, messages to groups will only deliver to WebSockets connected to the same cluster. For some site designs this will be fine, and if you think you can live with this and design around it (which means never designing anything around global notifications or events), this may be a good way to go.
For most projects, you’ll need to run a single channel layer at scale in order to achieve proper group delivery. Different backends will scale up differently, but the Redis backend can use multiple Redis servers and spread the load across them using sharding based on consistent hashing.
The key to a channel layer knowing how to scale a channel's delivery is whether or not it contains the ! character, which signifies a single-reader channel. Single-reader channels are only ever connected to by a single process, and so in the Redis case are stored on a single, predictable shard. Other channels are assumed to have many workers trying to read them, and so messages for these can be evenly divided across all shards.
Django channels are still relatively new, and so it’s likely that we don’t yet know the full story about how to scale things up; we run large load tests to try and refine and improve large-project scaling, but it’s no substitute for actual traffic. If you’re running channels at scale, you’re encouraged to send feedback to the Django team and work with us to hone the design and performance of the channel layer backends, or you’re free to make your own; the ASGI specification is comprehensive and comes with a conformance test suite, which should aid in any modification of existing backends or development of new ones.
Generic Consumers¶
Much like Django’s class-based views, Channels has class-based consumers. They provide a way for you to arrange code so it’s highly modifiable and inheritable, at the slight cost of it being harder to figure out the execution path.
We recommend you use them if you find them valuable; normal function-based consumers are also entirely valid, however, and may result in more readable code for simpler tasks.
There is one base generic consumer class, BaseConsumer, that provides the pattern for method dispatch and is the thing you can build entirely custom consumers on top of, and then protocol-specific subclasses that provide extra utility - for example, the WebsocketConsumer provides automatic group management for the connection.
When you use class-based consumers in routing, you need to use route_class rather than route; route_class knows how to talk to the class-based consumer and extract the list of channels it needs to listen on from it directly, rather than making you pass it in explicitly.
Here's an example of that routing:
from channels import route, route_class

channel_routing = [
    route_class(consumers.ChatServer, path=r"^/chat/"),
    route("websocket.connect", consumers.ws_connect, path=r"^/$"),
]
Class-based consumers are instantiated once for each message they consume, so it's safe to store things on self (in fact, self.message is the current message by default, and self.kwargs are the keyword arguments passed in from the routing).
Base¶
The BaseConsumer class is the foundation of class-based consumers, and what you can inherit from if you wish to build your own entirely from scratch.
You use it like this:
from channels.generic import BaseConsumer

class MyConsumer(BaseConsumer):
    method_mapping = {
        "channel.name.here": "method_name",
    }

    def method_name(self, message, **kwargs):
        pass
All you need to define is the method_mapping dictionary, which maps channel names to method names. The base code will take care of the dispatching for you, and set self.message to the current message as well.
If you want to perform more complicated routing, you'll need to override the dispatch() and channel_names() methods in order to do the right thing; remember, though, your channel names cannot change during runtime and must always be the same for as long as your process runs.
BaseConsumer and all other generic consumers that inherit from it provide two instance variables on the class:
- self.message, the Message object representing the message the consumer was called for.
- self.kwargs, keyword arguments from the routing.
WebSockets¶
There are two WebSockets generic consumers; one that provides group management, simpler send/receive methods, and basic method routing, and a subclass which additionally automatically serializes all messages sent and receives using JSON.
The basic WebSocket generic consumer is used like this:
from channels.generic.websockets import WebsocketConsumer

class MyConsumer(WebsocketConsumer):
    # Set to True to automatically port users from HTTP cookies
    # (you don't need channel_session_user, this implies it)
    http_user = True

    # Set to True if you want it, else leave it out
    strict_ordering = False

    def connection_groups(self, **kwargs):
        """
        Called to return the list of groups to automatically add/remove
        this connection to/from.
        """
        return ["test"]

    def connect(self, message, **kwargs):
        """
        Perform things on connection start
        """
        # Accept the connection; this is done by default if you don't override
        # the connect function.
        self.message.reply_channel.send({"accept": True})

    def receive(self, text=None, bytes=None, **kwargs):
        """
        Called when a message is received with either text or bytes
        filled out.
        """
        # Simple echo
        self.send(text=text, bytes=bytes)

    def disconnect(self, message, **kwargs):
        """
        Perform things on connection close
        """
        pass
You can call self.send inside the class to send things to the connection's reply_channel automatically. Any group names returned from connection_groups are used to add the socket to when it connects and to remove it from when it disconnects; you get keyword arguments too if your URL path, say, affects which group to talk to.
Additionally, the attribute self.path is always set to the current URL path.
The JSON-enabled consumer looks slightly different:
from channels.generic.websockets import JsonWebsocketConsumer

class MyConsumer(JsonWebsocketConsumer):
    # Set to True if you want it, else leave it out
    strict_ordering = False

    def connection_groups(self, **kwargs):
        """
        Called to return the list of groups to automatically add/remove
        this connection to/from.
        """
        return ["test"]

    def connect(self, message, **kwargs):
        """
        Perform things on connection start
        """
        pass

    def receive(self, content, **kwargs):
        """
        Called when a message is received with decoded JSON content
        """
        # Simple echo
        self.send(content)

    def disconnect(self, message, **kwargs):
        """
        Perform things on connection close
        """
        pass

    # Optionally provide your own custom json encoder and decoder
    # @classmethod
    # def decode_json(cls, text):
    #     return my_custom_json_decoder(text)
    #
    # @classmethod
    # def encode_json(cls, content):
    #     return my_custom_json_encoder(content)
For this subclass, receive only gets a content argument that is the already-decoded JSON as Python datastructures; similarly, send now only takes a single argument, which it JSON-encodes before sending down to the client.
Note that this subclass still can't intercept Group.send() calls to make them into JSON automatically, but it does provide self.group_send(name, content) that will do this for you if you call it explicitly.
self.close() is also provided to easily close the WebSocket from the server end with an optional status code once you are done with it.
WebSocket Multiplexing¶
Channels provides a standard way to multiplex different data streams over a single WebSocket, called a Demultiplexer.
It expects JSON-formatted WebSocket frames with two keys, stream and payload, and will match the stream against the mapping to find a channel name. It will then forward the message onto that channel while preserving reply_channel, so you can hook consumers up to them directly in the routing.py file, and use authentication decorators as you wish.
An example of use with class-based consumers:
from channels.generic.websockets import WebsocketDemultiplexer, JsonWebsocketConsumer

class EchoConsumer(JsonWebsocketConsumer):
    def connect(self, message, multiplexer, **kwargs):
        # Send data with the multiplexer
        multiplexer.send({"status": "I just connected!"})

    def disconnect(self, message, multiplexer, **kwargs):
        print("Stream %s is closed" % multiplexer.stream)

    def receive(self, content, multiplexer, **kwargs):
        # Simple echo
        multiplexer.send({"original_message": content})

class AnotherConsumer(JsonWebsocketConsumer):
    def receive(self, content, multiplexer=None, **kwargs):
        # Some other actions here
        pass

class Demultiplexer(WebsocketDemultiplexer):
    # Wire your JSON consumers here: {stream_name : consumer}
    consumers = {
        "echo": EchoConsumer,
        "other": AnotherConsumer,
    }

    # Optionally provide a custom multiplexer class
    # multiplexer_class = MyCustomJsonEncodingMultiplexer
The multiplexer allows the consumer class to be independent of the stream name. It holds the stream name and the demultiplexer on the attributes stream and demultiplexer.
The data binding code will also send out messages to clients in the same format, and you can encode things in this format yourself by using the WebsocketDemultiplexer.encode class method.
Sessions and Users¶
If you wish to use channel_session or channel_session_user with a class-based consumer, simply set one of the variables in the class body:
class MyConsumer(WebsocketConsumer):
    channel_session_user = True
This will run the appropriate decorator around your handler methods, and provide message.channel_session and message.user on the message object - both the one passed in to your handler as an argument as well as self.message, as they point to the same instance.
And if you just want to use the user from the Django session, add http_user:
class MyConsumer(WebsocketConsumer):
    http_user = True
This will give you message.user, which will be the same as request.user would be on a regular View.
Applying Decorators¶
To apply decorators to a class-based consumer, you'll have to wrap a functional part of the consumer; in this case, get_handler is likely the place you want to override, like so:
class MyConsumer(WebsocketConsumer):
    def get_handler(self, *args, **kwargs):
        handler = super(MyConsumer, self).get_handler(*args, **kwargs)
        return your_decorator(handler)
You can also use the Django method_decorator utility to wrap methods that have message as their first positional argument - note that it won't work for more high-level methods, like WebsocketConsumer.receive.
As route¶
Instead of making routes using route_class you may use the as_route shortcut. This function takes route filters (see Filters) as kwargs and returns a route_class. For example:
from . import consumers

channel_routing = [
    consumers.ChatServer.as_route(path=r"^/chat/"),
]
Use the attrs dict keyword for dynamic class attributes. For example, suppose you have this generic consumer:
class MyGenericConsumer(WebsocketConsumer):
    group = 'default'
    group_prefix = ''

    def connection_groups(self, **kwargs):
        return ['_'.join([self.group_prefix, self.group])]
You can create consumers with different group and group_prefix with attrs, like so:
from . import consumers

channel_routing = [
    consumers.MyGenericConsumer.as_route(path=r"^/path/1/",
                                         attrs={'group': 'one', 'group_prefix': 'pre'}),
    consumers.MyGenericConsumer.as_route(path=r"^/path/2/",
                                         attrs={'group': 'two', 'group_prefix': 'public'}),
]
Routing¶
Routing in Channels is done using a system that is simpler than Django's core URL routing: given a list of all possible routes, Channels iterates through them until it finds one that matches, and that route's consumer is then run.
The difference comes, however, in the fact that Channels has to route based on more than just URL; channel name is the main thing routed on, and URL path is one of many other optional things you can route on, depending on the protocol (for example, imagine email consumers - they would route on domain or recipient address instead).
The routing Channels takes is just a list of routing objects - the three built in ones are route, route_class and include, but any object that implements the routing interface will work:
- A method called match, taking a single message as an argument and returning None for no match or a tuple of (consumer, kwargs) if matched.
- A method called channel_names, returning the set of channel names the route can match, so the channel layer knows to listen on them.
Here are the three built-in routing objects:
- route: takes a channel name, a consumer function, and optional filter keyword arguments.
- route_class: takes a class-based consumer, and optional filter keyword arguments. Channel names are taken from the consumer's channel_names() method.
- include: takes either a list or string import path to a routing list, and optional filter keyword arguments.
Filters¶
Filtering is how you limit matches based on, for example, URLs; you use regular expressions, like so:
route("websocket.connect", consumers.ws_connect, path=r"^/chat/$")
Note
Unlike Django's URL routing, which strips the leading / from URLs for neatness, Channels keeps it. This is because the routing is generic, and not designed only around URLs.
You can have multiple filters:
route("email.receive", comment_response, to_address=r".*@example.com$", subject="^reply")
Multiple filters are always combined with logical AND; that is, you need to match every filter to have the consumer called.
Filters can capture keyword arguments to be passed to your function or your class-based consumer methods as kwargs:
route("websocket.connect", connect_blog, path=r'^/liveblog/(?P<slug>[^/]+)/stream/$')
You can also specify filters on an include:
include("blog_includes", path=r'^/liveblog')
When you specify filters on include, the matched portion of the attribute is removed for matches inside the include; for example, this arrangement matches URLs like /liveblog/stream/, because the outside include strips off the /liveblog part it matches before passing it inside:
inner_routes = [
    route("websocket.connect", connect_blog, path=r'^/stream/'),
]

routing = [
    include(inner_routes, path=r'^/liveblog')
]
You can also include named capture groups in the filters on an include and they'll be passed to the consumer just like those on route; note, though, that if the keyword argument names from the include and the route clash, the values from route will take precedence.
Data Binding¶
Channels' data binding framework automates the process of tying Django models into frontend views, such as javascript-powered website UIs. It provides a quick and flexible way to generate messages to Groups when models change, and to accept messages that change models themselves.
The main target is WebSockets for now, but the framework is flexible enough to be used over any protocol.
What does data binding do?¶
Channels' data binding works in two directions:
- Outbound: when a model is changed through Django, a message is sent to listening clients. This includes creation, updates and deletion of instances.
- Inbound: a standardized message format lets clients send messages that create, update and delete instances.
Combined, these let you design a UI that automatically updates to reflect values changed by other clients. For example, a live blog can update as soon as a post object is changed via data binding on the post objects, and the editing interface can likewise show edits from other users as they happen.
It has some limitations:
- Outbound binding is triggered by signals, so if a model is changed outside of Django (or via the QuerySet .update() method), no signal is fired and no change message is sent. You can trigger the change message yourself, but you'll need to source the events from the right place for your system.
- The built-in serializers are based on Django's, so they can only handle certain field types. For more flexibility, you can plug in a serialization library such as Django REST Framework's.
Getting Started¶
A single Binding subclass handles both outbound and inbound binding for a model, and you can have multiple bindings per model (e.g. if you want different formats or permission checks).
You can inherit from the base Binding class and implement everything yourself, but we'll focus on the WebSocket JSON variant here, as it's the easiest thing to get started with and most likely what you'll want.
Start off like this:
from django.db import models
from channels.binding.websockets import WebsocketBinding
class IntegerValue(models.Model):
name = models.CharField(max_length=100, unique=True)
value = models.IntegerField(default=0)
class IntegerValueBinding(WebsocketBinding):
model = IntegerValue
stream = "intval"
fields = ["name", "value"]
@classmethod
def group_names(cls, instance):
return ["intval-updates"]
def has_permission(self, user, action, pk):
return True
This defines a WebSocket binding - so it knows to send things in the JSON WebSocket frame format - and provides the three things you must always provide:
fields
: A whitelist of fields to return in the serialized request. Channels does not default to all fields for security reasons; if you want all fields, set it to ["__all__"]. You can instead use exclude to make a blacklist.
group_names
: Returns the list of groups to send outbound updates about an instance to. For example, you could dispatch posts to different live blogs based on a parent blog ID in the group name; here, we just use a fixed group name. Based on how group_names changes as an instance changes, Channels works out whether clients need create, update or delete messages (or whether the change is hidden from them).
has_permission
: Returns whether an inbound binding update is allowed to actually be carried out on the model. We've been very unsafe here and always return True, but this is where you could check against Django's permission system, or your own logic.
For reference, action is always one of the unicode strings "create", "update" or "delete". You also supply the stream name used for WebSocket Multiplexing to the client - you must use multiplexing if you use WebSocket data binding.
Just adding a binding like this in a file that's imported will get outbound messages sending, but you still need a Consumer that will both accept incoming binding updates and add people to the right Groups when they connect. The WebSocket binding classes use the standard WebSocket Multiplexing, so you just need to use that:
from channels.generic.websockets import WebsocketDemultiplexer
from .binding import IntegerValueBinding
class Demultiplexer(WebsocketDemultiplexer):
consumers = {
"intval": IntegerValueBinding.consumer,
}
def connection_groups(self):
return ["intval-updates"]
As well as the standard stream-to-consumer mapping, you also need to provide connection_groups, a list of groups to add people to when they connect. This should match the logic of group_names on your binding - we've used our fixed group name here. Note that the binding has a .consumer attribute; this is a standard WebSocket-JSON consumer that the demultiplexer can pass demultiplexed websocket.receive messages to.
Tie that into your routing, and you're done:
from channels import route_class, route
from .consumers import Demultiplexer
from .models import IntegerValueBinding
channel_routing = [
route_class(Demultiplexer, path="^/binding/"),
]
Frontend Considerations¶
You can use the standard Channels WebSocket wrapper to
automatically run demultiplexing, and then tie the events you receive into your
frontend framework of choice based on action
, pk
and data
.
Note
We'd love to provide reference data binding plugins for popular JavaScript frameworks; if you're interested in contributing, please get in touch.
Custom Serialization/Protocols¶
Rather than inheriting from WebsocketBinding, you can inherit directly from the base Binding class and implement serialization and deserialization yourself. Until reference documentation for this is written, we recommend looking at the source code in channels/bindings/base.py; it's reasonably well-commented.
Dealing with Disconnection¶
Because the data binding in Channels keeps no history of events, a client that loses its network connection will miss any instance messages sent while it was disconnected. For this reason, it's recommended that you reload data directly over an API once the connection is re-established, and that you don't rely on the live updates for critical functionality - or design your UI to cope with missing data (e.g. if only updates come in and not creates, the next update will fix everything up).
Channels WebSocket wrapper¶
Channels ships with a javascript WebSocket wrapper to help you connect to your websocket and send/receive messages.
First, you must include the javascript library in your template; if you’re using Django’s staticfiles, this is as easy as:
{% load staticfiles %}
<script type="text/javascript" src="{% static 'channels/js/websocketbridge.js' %}"></script>
If you are using an alternative method of serving static files, the compiled
source code is located at channels/static/channels/js/websocketbridge.js
in
a Channels installation. We compile the file for you each release; it’s ready
to serve as-is.
The library is deliberately quite low-level and generic; it’s designed to be compatible with any JavaScript code or framework, so you can build more specific integration on top of it.
To process messages:
const webSocketBridge = new channels.WebSocketBridge();
webSocketBridge.connect('/ws/');
webSocketBridge.listen(function(action, stream) {
console.log(action, stream);
});
To send messages, use the send method:
webSocketBridge.send({prop1: 'value1', prop2: 'value1'});
To demultiplex specific streams:
webSocketBridge.connect('/ws/');
webSocketBridge.listen();
webSocketBridge.demultiplex('mystream', function(action, stream) {
console.log(action, stream);
});
webSocketBridge.demultiplex('myotherstream', function(action, stream) {
console.info(action, stream);
});
To send a message to a specific stream:
webSocketBridge.stream('mystream').send({prop1: 'value1', prop2: 'value1'})
The WebSocketBridge instance exposes the underlying ReconnectingWebSocket as the socket property. You can use this property to add any custom behavior. For example:
webSocketBridge.socket.addEventListener('open', function() {
console.log("Connected to WebSocket");
})
The library is also available as an npm module, under the name django-channels.
Channel Layer Types¶
Multiple choices of backend are available, to fill different tradeoffs of complexity, throughput and scalability. You can also write your own backend if you wish; the spec they conform to is called ASGI, and any ASGI-compliant channel layer can be used.
Redis¶
The Redis layer is the recommended backend to run Channels with, as it supports both high throughput on a single Redis server as well as the ability to run against a set of Redis servers in a sharded mode.
To use the Redis layer, simply install it from PyPI (it lives in a separate package, as we didn't want to force a dependency on redis-py for the main install):
pip install -U asgi_redis
By default, it will attempt to connect to a Redis server on localhost:6379, but you can override this with the hosts key in its config:
CHANNEL_LAYERS = {
"default": {
"BACKEND": "asgi_redis.RedisChannelLayer",
"ROUTING": "???",
"CONFIG": {
"hosts": [("redis-channel-1", 6379), ("redis-channel-2", 6379)],
},
},
}
Sharding¶
The sharding model is based on consistent hashing - in particular, response channels are hashed and used to pick a single Redis server that both the interface server and the worker will use.
For normal channels, since any worker can service any channel request, messages are simply distributed across all possible servers, and workers will pick a single server to listen to. Note that if you run more Redis servers than workers, it's very likely that some servers will not have workers listening to them; we recommend you always have at least ten workers for each Redis server to ensure good distribution. Workers will, however, change servers periodically (every five seconds or so), so queued messages should eventually get a response.
Note that if you change the set of sharding servers, you will need to restart all interface servers and workers with the new set before anything will work, and any in-flight messages will be lost (even with persistence, some will be). The consistent hashing model relies on all running clients having the same settings; any misconfigured interface server or worker will drop some or all messages.
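The server-selection idea can be sketched in a few lines. This is a toy illustration of hashing a reply channel name to pick a server, not the exact algorithm asgi_redis uses:

```python
import binascii

def pick_server(reply_channel, servers):
    """Toy sketch: hash the reply channel name so every process that
    sees the same name deterministically picks the same Redis server."""
    index = binascii.crc32(reply_channel.encode("utf8")) % len(servers)
    return servers[index]

servers = [("redis-channel-1", 6379), ("redis-channel-2", 6379)]
# An interface server and a worker both derive the same server
# from the same channel name, with no coordination needed.
assert pick_server("http.response!abc", servers) == pick_server("http.response!abc", servers)
```

This is also why all clients must share the same server list: a process with a different list would hash names onto different servers and messages would go astray.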
RabbitMQ¶
RabbitMQ layer is comparable to Redis in terms of latency and throughput. It can work with single RabbitMQ node and with Erlang cluster.
You need to install layer package from PyPI:
pip install -U asgi_rabbitmq
To use it you also need provide link to the virtual host with granted permissions:
CHANNEL_LAYERS = {
"default": {
"BACKEND": "asgi_rabbitmq.RabbitmqChannelLayer",
"ROUTING": "???",
"CONFIG": {
"url": "amqp://guest:guest@rabbitmq:5672/%2F",
},
},
}
This layer has complete documentation on its own.
IPC¶
The IPC backend uses POSIX shared memory segments and semaphores to allow different processes on the same machine to communicate with each other.
As it uses shared memory, it does not require any additional servers to be running, and is quicker than any network-based channel layer. However, it can only run between processes on the same machine.
Warning
The IPC layer only communicates between processes on the same machine, and while you might initially be tempted to run a cluster of machines each with their own IPC-based set of processes, this will result in groups not working properly: events sent to a group will only go to those channels that joined the group on the same machine. This backend is for single-machine deployments only.
In-memory¶
The in-memory layer is only useful when running the protocol server and the worker server in a single process; the most common case of this is runserver, where a server thread, this channel layer, and worker threads all co-exist inside the same python process.
Its path is asgiref.inmemory.ChannelLayer. If you try to use this channel layer with runworker, it will exit, as it does not support cross-process communication.
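A settings fragment wiring this up might look like the following; the ROUTING module path is an assumption for illustration:

```python
# Hypothetical settings.py fragment for single-process development use.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgiref.inmemory.ChannelLayer",
        "ROUTING": "myproject.routing.channel_routing",  # assumed module path
    },
}
```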
Writing Custom Channel Layers¶
The interface channel layers present to Django and other software that communicates over them is codified in a specification called ASGI.
Any channel layer that conforms to the ASGI spec can be used
by Django; just set BACKEND
to the class to instantiate and CONFIG
to
a dict of keyword arguments to initialize the class with.
Delay Server¶
Channels ships with an optional app, channels.delay, that implements the ASGI Delay Protocol.
The server is exposed through a custom rundelay management command, which listens on the asgi.delay channel for messages to delay.
Getting Started with Delay¶
To install the app, add 'channels.delay' to 'INSTALLED_APPS':
INSTALLED_APPS = (
...
'channels',
'channels.delay'
)
Run migrate to create the tables:
python manage.py migrate
Then run the delay server process:
python manage.py rundelay
Now you're ready to start delaying messages.
Delaying Messages¶
To delay a message by a fixed number of milliseconds, use the delay parameter. Here's an example:
from channels import Channel
delayed_message = {
'channel': 'example_channel',
'content': {'x': 1},
'delay': 10 * 1000
}
# The message will be delayed 10 seconds by the server and then sent
Channel('asgi.delay').send(delayed_message, immediately=True)
Testing Consumers¶
When you want to write unit tests for your new Channels consumers, you’ll realize that you can’t use the standard Django test client to submit fake HTTP requests - instead, you’ll need to submit fake Messages to your consumers, and inspect what Messages they send themselves.
We provide a TestCase
subclass that sets all of this up for you,
however, so you can easily write tests and check what your consumers are sending.
ChannelTestCase¶
If your tests inherit from the channels.test.ChannelTestCase
base class,
whenever you run tests your channel layer will be swapped out for a captive
in-memory layer, meaning you don’t need an external server running to run tests.
Moreover, you can inject messages onto this layer, and inspect messages sent to it, to help you test your consumers.
To inject a message onto the layer, simply call Channel.send()
inside
any test method on a ChannelTestCase
subclass, like so:
from channels import Channel
from channels.test import ChannelTestCase
class MyTests(ChannelTestCase):
def test_a_thing(self):
# This goes onto an in-memory channel, not the real backend.
Channel("some-channel-name").send({"foo": "bar"})
To receive a message from the layer, you can use self.get_next_message(channel)
,
which handles receiving the message and converting it into a Message object for
you (if you want, you can call receive_many
on the underlying channel layer,
but you’ll get back a raw dict and channel name, which is not what consumers want).
You can use this both to get Messages to send to consumers as their primary argument, as well as to get Messages from channels that consumers are supposed to send on to verify that they did.
You can even pass require=True
to get_next_message
to make the test
fail if there is no message on the channel (by default, it will return you
None
instead).
Here’s an extended example testing a consumer that’s supposed to take a value
and post the square of it to the "result"
channel:
from channels import Channel
from channels.test import ChannelTestCase
class MyTests(ChannelTestCase):
def test_a_thing(self):
# Inject a message onto the channel to use in a consumer
Channel("input").send({"value": 33})
# Run the consumer with the new Message object
my_consumer(self.get_next_message("input", require=True))
# Verify there's a result and that it's accurate
result = self.get_next_message("result", require=True)
self.assertEqual(result['value'], 1089)
Generic Consumers¶
You can use ChannelTestCase
to test generic consumers as well. Just pass the message
object from get_next_message
to the constructor of the class. To test replies to a specific channel,
use the reply_channel
property on the Message
object. For example:
from channels import Channel
from channels.test import ChannelTestCase
from myapp.consumers import MyConsumer
class MyTests(ChannelTestCase):
def test_a_thing(self):
# Inject a message onto the channel to use in a consumer
Channel("input").send({"value": 33})
# Run the consumer with the new Message object
message = self.get_next_message("input", require=True)
MyConsumer(message)
# Verify there's a reply and that it's accurate
result = self.get_next_message(message.reply_channel.name, require=True)
self.assertEqual(result['value'], 1089)
Groups¶
You can test Groups in the same way as Channels inside a ChannelTestCase
;
the entire channel layer is flushed each time a test is run, so it’s safe to
do group adds and sends during a test. For example:
from channels import Group
from channels.test import ChannelTestCase
class MyTests(ChannelTestCase):
def test_a_thing(self):
# Add a test channel to a test group
Group("test-group").add("test-channel")
# Send to the group
Group("test-group").send({"value": 42})
# Verify the message got into the destination channel
result = self.get_next_message("test-channel", require=True)
self.assertEqual(result['value'], 42)
Clients¶
For more complicated test suites, you can use the Client abstraction, which provides an easy way to test the full life cycle of messages with a couple of methods: send to send a message with the given content to the given channel, consume to run the appointed consumer for the next message, and receive to get replies for the client.
Very often you may need to send and then call a consumer, one after the other; for this purpose, use the send_and_consume method:
from channels.test import ChannelTestCase, Client
class MyTests(ChannelTestCase):
def test_my_consumer(self):
client = Client()
client.send_and_consume('my_internal_channel', {'value': 'my_value'})
self.assertEqual(client.receive(), {'all is': 'done'})
You can use WSClient for websocket-related consumers. It automatically serializes JSON content, manages cookies and headers, gives easy access to the session, and adds the ability to authorize your requests.
For example:
# consumers.py
class RoomConsumer(JsonWebsocketConsumer):
http_user = True
groups = ['rooms_watchers']
def receive(self, content, **kwargs):
self.send({'rooms': self.message.http_session.get("rooms", [])})
Channel("rooms_receive").send({'user': self.message.user.id,
'message': content['message']})
# tests.py
from channels import Group
from channels.test import ChannelTestCase, WSClient
class RoomsTests(ChannelTestCase):
def test_rooms(self):
client = WSClient()
user = User.objects.create_user(
username='test', email='test@test.com', password='123456')
client.login(username='test', password='123456')
client.send_and_consume('websocket.connect', path='/rooms/')
# check that there is nothing to receive
self.assertIsNone(client.receive())
# test that the client in the group
Group(RoomConsumer.groups[0]).send({'text': 'ok'}, immediately=True)
self.assertEqual(client.receive(json=False), 'ok')
client.session['rooms'] = ['test', '1']
client.session.save()
client.send_and_consume('websocket.receive',
text={'message': 'hey'},
path='/rooms/')
# test 'response'
self.assertEqual(client.receive(), {'rooms': ['test', '1']})
self.assertEqual(self.get_next_message('rooms_receive').content,
{'user': user.id, 'message': 'hey'})
# There is nothing to receive
self.assertIsNone(client.receive())
Instead of calling the WSClient.login method with credentials as arguments, you may call WSClient.force_login (as with the Django test client) with the user object.
The receive method by default tries to deserialize the JSON text content of a message, so if you need to skip decoding, use receive(json=False), as in the example.
For testing consumers with enforce_ordering, initialize HttpClient with the ordered flag; but if you want to manage the order yourself, don't use the flag - pass the order in content instead:
client = HttpClient(ordered=True)
client.send_and_consume('websocket.receive', text='1', path='/ws') # order = 0
client.send_and_consume('websocket.receive', text='2', path='/ws') # order = 1
client.send_and_consume('websocket.receive', text='3', path='/ws') # order = 2
# manually
client = HttpClient()
client.send('websocket.receive', content={'order': 0}, text='1')
client.send('websocket.receive', content={'order': 2}, text='2')
client.send('websocket.receive', content={'order': 1}, text='3')
# consume() is called four times so the `waiting` message with order 2 gets processed
client.consume('websocket.receive')
client.consume('websocket.receive')
client.consume('websocket.receive')
client.consume('websocket.receive')
Applying Routes¶
When you need to test your consumers without routes in settings, or you want to test your consumers in a more isolated and atomic way, it's simpler to use the apply_routes context manager and decorator with your ChannelTestCase.
It takes a list of routes that you want to use and overwrites existing routes:
from channels.test import ChannelTestCase, WSClient, apply_routes
class MyTests(ChannelTestCase):
def test_myconsumer(self):
client = WSClient()
with apply_routes([MyConsumer.as_route(path='/new')]):
client.send_and_consume('websocket.connect', '/new')
self.assertEqual(client.receive(), {'key': 'value'})
Testing Data Binding with WSClient¶
As you know, data binding in Channels works in outbound and inbound directions, so the two directions are tested in different ways; WSClient and apply_routes will help you do this.
When testing outbound consumers, you just need to import your Binding subclass with its specified group_names. In the test, you can join one of those groups, make some changes to the target model, and check the received message.
Let's test the IntegerValueBinding from the data binding section, starting with creation:
from channels.test import ChannelTestCase, WSClient
from channels.signals import consumer_finished
class TestIntegerValueBinding(ChannelTestCase):
def test_outbound_create(self):
# We use WSClient because of json encoding messages
client = WSClient()
client.join_group("intval-updates") # join outbound binding
# create target entity
value = IntegerValue.objects.create(name='fifty', value=50)
received = client.receive() # receive outbound binding message
self.assertIsNotNone(received)
self.assertTrue('payload' in received)
self.assertTrue('action' in received['payload'])
self.assertTrue('data' in received['payload'])
self.assertTrue('name' in received['payload']['data'])
self.assertTrue('value' in received['payload']['data'])
self.assertEqual(received['payload']['action'], 'create')
self.assertEqual(received['payload']['model'], 'values.integervalue')
self.assertEqual(received['payload']['pk'], value.pk)
self.assertEqual(received['payload']['data']['name'], 'fifty')
self.assertEqual(received['payload']['data']['value'], 50)
# assert that is nothing to receive
self.assertIsNone(client.receive())
There is another situation with inbound binding. It is used with WebSocket Multiplexing, so we apply two routes: a websocket route for the demultiplexer, and a route with the internal consumer for the binding itself. We then connect to the websocket entry point and test the different actions. For example:
class TestIntegerValueBinding(ChannelTestCase):
def test_inbound_create(self):
# check that initial state is empty
self.assertEqual(IntegerValue.objects.all().count(), 0)
with apply_routes([Demultiplexer.as_route(path='/'),
route("binding.intval", IntegerValueBinding.consumer)]):
client = WSClient()
client.send_and_consume('websocket.connect', path='/')
client.send_and_consume('websocket.receive', path='/', text={
'stream': 'intval',
'payload': {'action': CREATE, 'data': {'name': 'one', 'value': 1}}
})
# our Demultiplexer routes messages to the inbound consumer,
# so we need to call that consumer
client.consume('binding.intval')
self.assertEqual(IntegerValue.objects.all().count(), 1)
value = IntegerValue.objects.all().first()
self.assertEqual(value.name, 'one')
self.assertEqual(value.value, 1)
Multiple Channel Layers¶
If you want to test code that uses multiple channel layers, specify the alias
of the layers you want to mock as the test_channel_aliases
attribute on
the ChannelTestCase
subclass; by default, only the default
layer is
mocked.
You can pass an alias
argument to get_next_message
, Client
and Channel
to use a different layer too.
Live Server Test Case¶
You can use browser automation libraries like Selenium or Splinter to
check your application against a real channel layer installation. First of all, provide the TEST_CONFIG setting to prevent clashes with your running dev environment:
CHANNEL_LAYERS = {
"default": {
"BACKEND": "asgi_redis.RedisChannelLayer",
"ROUTING": "my_project.routing.channel_routing",
"CONFIG": {
"hosts": [("redis-server-name", 6379)],
},
"TEST_CONFIG": {
"hosts": [("localhost", 6379)],
},
},
}
Now use ChannelLiveServerTestCase
for your acceptance tests.
from channels.test import ChannelLiveServerTestCase
from splinter import Browser
class IntegrationTest(ChannelLiveServerTestCase):
def test_browse_site_index(self):
with Browser() as browser:
browser.visit(self.live_server_url)
# the rest of your integration test...
In the test above, Daphne and Channels worker processes are fired up. These processes run your project against the test database and the default channel layer you specify in the settings. If the channel layer supports the flush extension, an initial cleanup will be done - so do not run this code against your production environment. When the channels infrastructure is ready, the default web browser will also be started, so you can open your website in a real browser which can execute JavaScript and operate on WebSockets. A live_server_ws_url property is also provided if you decide to run messaging directly from Python.
By default, the live server test case will serve static files. To disable this feature, override the serve_static class attribute:
class IntegrationTest(ChannelLiveServerTestCase):
serve_static = False
def test_websocket_message(self):
# JS and CSS are not available in this test.
...
Reference¶
Consumers¶
When you configure channel routing, the object assigned to a channel
should be a callable that takes exactly one positional argument, here
called message
, which is a message object. A consumer
is any callable that fits this definition.
Consumers are not expected to return anything, and if they do, it will be
ignored. They may raise channels.exceptions.ConsumeLater
to re-insert
their current message at the back of the channel it was on, but be aware you
can only do this so many times (10 by default) until the message is dropped
to avoid deadlocking.
Messages¶
Message objects are what consumers get passed as their only argument. They
encapsulate the basic ASGI message, which is a dict
, with
extra information. They have the following attributes:
content
: The actual message content, as a dict. See the ASGI spec or protocol message definition document for how this is structured.
channel
: A Channel object, representing the channel this message was received on. Useful if one consumer handles multiple channels.
reply_channel
: A Channel object, representing the unique reply channel for this message, or None if there isn't one.
channel_layer
: A ChannelLayer object, representing the underlying channel layer this was received on. This can be useful in projects that have more than one layer to identify where to send messages the consumer generates (you can pass it to the constructor of Channel or Group).
Channel¶
Channel objects are a simple abstraction around ASGI channels, which by default are unicode strings. The constructor looks like this:
channels.Channel(name, alias=DEFAULT_CHANNEL_LAYER, channel_layer=None)
Normally, you'll just call Channel("my.channel.name") and it'll do the
right thing, but if you're in a project with multiple channel layers set up,
right thing, but if you’re in a project with multiple channel layers set up,
you can pass in either the layer alias or the layer object and it’ll send
onto that one instead. They have the following attributes:
name
: The unicode string representing the channel name.
channel_layer
: A ChannelLayer object, representing the underlying channel layer to send messages on.
send(content)
: Sends the dict provided as content over the channel. The content should conform to the relevant ASGI spec or protocol definition.
Groups¶
Groups represent the underlying ASGI group concept in an object-oriented way. The constructor looks like this:
channels.Group(name, alias=DEFAULT_CHANNEL_LAYER, channel_layer=None)
Like Channel, you would usually just pass a name
, but
can pass a layer alias or object if you want to send on a non-default one.
They have the following attributes:
name
: The unicode string representing the group name.
channel_layer
: A ChannelLayer object, representing the underlying channel layer to send messages on.
send(content)
: Sends the dict provided as content to all members of the group.
add(channel)
: Adds the given channel (as either a Channel object or a unicode string name) to the group. If the channel is already in the group, does nothing.
discard(channel)
: Removes the given channel (as either a Channel object or a unicode string name) from the group, if it's in the group. Does nothing otherwise.
Channel Layers¶
These are a wrapper around the underlying ASGI channel layers that supplies a routing system that maps channels to consumers, as well as aliases to help distinguish different layers in a project with multiple layers.
You shouldn’t make these directly; instead, get them by alias (default
is
the default alias):
from channels import channel_layers
layer = channel_layers["default"]
They have the following attributes:
alias
: The alias of this layer.
router
: An object which represents the layer's mapping of channels to consumers. Has the following attributes:
  channels
  : The set of channels this router can handle, as unicode strings.
  match(message)
  : Takes a Message and returns either a (consumer, kwargs) tuple specifying the consumer to run and the keyword arguments to pass that were extracted via routing patterns, or None, meaning there's no route available.
AsgiRequest¶
This is a subclass of django.http.HttpRequest
that provides decoding from
ASGI requests, and a few extra methods for ASGI-specific info. The constructor is:
channels.handler.AsgiRequest(message)
message
must be an ASGI http.request
format message.
Additional attributes are:
reply_channel
: A Channel object that represents the http.response.? reply channel for this request.
message
: The raw ASGI message passed in the constructor.
AsgiHandler¶
This is a class in channels.handler
that’s designed to handle the workflow
of HTTP requests via ASGI messages. You likely don’t need to interact with it
directly, but there are two useful ways you can call it:
AsgiHandler(message)
: Will process the message through the Django view layer and yield one or more response messages to send back to the client, encoded from the Django HttpResponse.
encode_response(response)
: A classmethod that can be called with a Django HttpResponse and will yield one or more ASGI messages that are the encoded response.
Decorators¶
Channels provides decorators to assist with persisting data and security.
channel_session
: Provides a session-like object called "channel_session" to consumers as a message attribute that will auto-persist across consumers with the same incoming "reply_channel" value. Use this to persist data across the lifetime of a connection.
http_session
: Wraps a HTTP or WebSocket connect consumer (or any consumer of messages that provides a "cookies" or "get" attribute) to provide a "http_session" attribute that behaves like request.session; that is, it's hung off of a per-user session key that is saved in a cookie or passed as the "session_key" GET parameter.
It won't automatically create and set a session cookie for users who don't have one - that's what SessionMiddleware is for; this is a simpler read-only version for more low-level code.
If a message does not have a session we can inflate, the "session" attribute will be None, rather than an empty session you can write to.
Does not allow a new session to be set; that must be done via a view. This is only an accessor for any existing session.
channel_and_http_session
: Enables both the channel_session and http_session. Stores the http session key in the channel_session on websocket.connect messages. It will then hydrate the http_session from that same key on subsequent messages.
allowed_hosts_only
: Wraps a WebSocket connect consumer and ensures the request originates from an allowed host. Reads the Origin header and only passes requests originating from a host listed in ALLOWED_HOSTS to the consumer. Requests from other hosts, or with a missing or invalid Origin header, are rejected.
Frequently Asked Questions¶
Why use Channels rather than just using Tornado/gevent/asyncio/etc.?¶
They're kind of solving different problems. Tornado, gevent and the other in-process async solutions are a way of making a single Python process act asynchronously - doing other things while a HTTP request is going on, or juggling hundreds of incoming connections in a single process.
Channels is different - all the code you write for consumers runs synchronously. You can do all the blocking filesystem calls and CPU-bound tasks you like; all you'll do is block the one worker you're running on, and the other worker processes will just keep going and handle other messages.
This is partly because Django is all written in a synchronous manner, and rewriting it to be entirely asynchronous would be nearly impossible, but also because we believe normal developers shouldn't have to write asynchronous-friendly code; it's far too easy to shoot yourself in the foot - by running a tight loop without yielding in the middle, or accessing a file that happens to be on a slow NFS share - and you merely block the process you're in.
Channels still uses asynchronous code, but it confines it to the interface layer - the processes that serve HTTP, WebSocket and other requests. These do indeed use asynchronous frameworks (currently, asyncio and Twisted) to handle and manage all the concurrent connections, but they're also fixed pieces of code; as an end developer, you'll likely never have to touch them.
All your work can be done with standard Python libraries and patterns, and the only thing you need to look out for is worker contention - if you flood your workers with infinite loops, of course they'll stop working, but that's still better than a single process grinding to a halt as soon as one thing blocks.
Why not just use a node/go/etc. proxy in front of Django?¶
There are a couple of solid solutions for bridging Django to WebSockets using a more "async-friendly" language (or Python framework) - terminating the WebSockets in (say) a Node process, and then bridging it to Django using either a reverse-proxy model, or Redis signalling, or some other mechanism.
If anything, Channels makes this approach even easier to achieve. The key is that Channels introduces a standardised way to run event-triggered pieces of code, and a standardised way to route messages via named channels, striking a balance between flexibility and ease of use.
While the interface servers that ship with Channels are written in Python, nothing stops you writing an interface server in another language, provided it follows the same serialization standards for HTTP/WebSocket/etc. messages; in fact, alternative servers may well be published at some point.
Why isn't there guaranteed delivery or a retry mechanism?¶
Channels' design philosophy is that anything is allowed to fail - a consumer can error out and fail to send replies, the channel layer can restart and drop a few messages, and a dogpile or spike can cause servers to delay or reject some incoming clients.
This is because designing a system that is fully fault-tolerant, end to end, would result in unbelievably low throughput, and almost nothing needs that level of guarantee. If you want a certain level of guarantee, you can build it on top of what Channels provides (for example, by using a database to mark things that need to be cleaned up and re-sending them after a while, or by making consumers idempotent and over-sending messages rather than under-sending).
That is, design your system expecting that parts of it may fail, and design for the detection and recovery of that state, rather than hanging the entire functionality on a system working exactly as designed. Channels takes this idea and uses it to provide a high-throughput solution that is mostly reliable, rather than a low-throughput one that is nearly completely reliable.
Can I run HTTP requests/service calls/etc. in parallel from Django without blocking?¶
Not directly - Channels only lets a consumer function listen to channels at the start, which is what kicks it off; you can't send tasks off on channels to other consumers and then wait on the result. You can send them off and keep going, but you can never block waiting on a channel in a consumer, as you would hit deadlocks, livelocks and similar issues.
This is partly a design feature - it falls into the class of "difficult async concepts that it's easy to shoot yourself in the foot with" - but it also keeps the underlying channels implementation simple. By not allowing this kind of blocking, a channel layer can be specified in a way that allows horizontal scaling and sharding.
What you can do instead is:
- Dispatch a whole load of tasks to run later in the background and then finish your current task - for example, dispatching an avatar thumbnailing task in the avatar upload view, then returning a "we got it!" HTTP response.
- Pass details along to the other task about how to continue, in particular the channel name of another consumer that will finish the job, or IDs or other details of the data (remember, message contents are just a dict you can put anything into). For example, you might have a generic image-fetching task for a variety of models that fetches an image, stores it, and passes the resulting ID and the ID of the object you want to attach it to onto a different channel depending on the model - you'd pass the next channel name and the target object's ID in the message, and the consumer could then send a new message onto that channel name once it's done.
- Have the interface servers run the HTTP requests or slow tasks (remember, interface servers *are* written as specialised, highly-asynchronous code) and send their results onto another channel when they're done. Again, you can't wait around inside a consumer and block on the results, but you can provide another consumer on a new channel that will do the second half.
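The first pattern above - dispatch the slow work onto another channel, then answer immediately - looks like this in outline. The deque-backed `channels` dict stands in for a real channel layer, and the channel and view names are illustrative:

```python
from collections import defaultdict, deque

# Stand-in channel layer: one FIFO queue per channel name.
channels = defaultdict(deque)

def send(channel, message):
    channels[channel].append(message)

def upload_avatar(message):
    # Hand the slow thumbnailing off to whatever worker listens on
    # "thumbnail.generate", then reply right away without blocking.
    send("thumbnail.generate", {"image_id": message["image_id"]})
    return {"status": "we got it!"}

response = upload_avatar({"image_id": 42})
assert response == {"status": "we got it!"}
assert channels["thumbnail.generate"].popleft() == {"image_id": 42}
```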
How do I associate data with incoming connections?¶
Channels provides full integration of WebSockets with Django's session and auth systems, as well as a per-WebSocket session for persisting data, so you can easily persist data on a per-connection or per-user basis.
You can also provide your own solution if you wish, keyed off of message.reply_channel, which is the unique channel representing the connection; but remember that whatever you store it in must be network-transparent - storing things in a global variable won't work outside of development.
How do I talk to Channels from my non-Django application?¶
If you have an external server or script you want to talk to Channels with, you have a few options:
- If it’s a Python program, and you’ve made an
asgi.py
file for your project (see Deployment), you can import the channel layer directly as yourproject.asgi.channel_layer and call send() and receive_many() on it directly. See the ASGI spec for the API the channel layer presents.
- If you just need to send messages in when events happen, you can make a
management command that calls
Channel("namehere").send({...})
so your external program can just call manage.py send_custom_event
(or similar) to send a message. Remember, you can send onto channels from any code in your project. - If neither of these work, you’ll have to communicate with Django over HTTP, WebSocket, or another protocol that your project talks, as normal.
Do you support Python 2, 3 or 2+3?¶
Django-channels and all of its dependencies are compatible with Python 2.7, 3.4 and higher. This includes the parts of Twisted that some of the Channels packages (like daphne) use.
Why isn't there support for socket.io/SockJS/long poll fallback?¶
Emulating WebSocket over HTTP long polling requires considerably more effort than terminating WebSockets; some server-side state of the connection must be kept in a place that's accessible from all nodes, so that when a new long poll comes in, messages can be replayed onto it.
For this reason, we consider it out of scope for Channels itself, though Channels and Daphne do come with first-class support for long-running HTTP connections without taking up a worker thread (you can consume http.request and not send a response until much later, adding the reply channel to groups, and even listen on the http.disconnect channel that tells you when long polls terminate early).
ASGI (Asynchronous Server Gateway Interface) Draft Spec¶
Note
This is still in development, but is now mostly complete.
Abstract¶
This document proposes a standard interface between network protocol servers (particularly web servers) and Python applications, intended to allow the handling of multiple common protocol styles (including HTTP, HTTP2 and WebSocket).
This base specification is intended to fix in place the set of APIs by which these servers interact and the guarantees and style of message delivery; each supported protocol (such as HTTP) has a sub-specification that outlines how to encode and decode that protocol into messages.
The set of sub-specifications is available in the Message Formats section.
Rationale¶
The WSGI specification has worked well since it was introduced, and allowed for great flexibility in Python framework and web server choice. However, its design is irrevocably tied to the HTTP-style request/response cycle, and more and more protocols that do not follow this pattern are becoming a standard part of web programming (most notably, WebSocket), so a change is needed.
ASGI attempts to preserve a simple application interface, while providing an abstraction that allows data to be sent and received at any time, and from different application processes.
It also takes the principle of turning protocols into Python-compatible, asynchronous-friendly sets of messages and generalises it into two parts: a standardised interface for communication around which to build servers (this document), and a set of standard message formats for each protocol.
Its primary goal is to provide a way to write HTTP/2 and WebSocket code alongside normal HTTP handling code; part of this design is ensuring there is an easy path to use both existing WSGI servers and applications, as a large majority of Python web usage relies on WSGI, and providing an easy path forwards is critical to adoption. Details on that interoperability are covered in /asgi/www.
The end result of this process has been a specification for generalised inter-process communication between Python processes, with a certain set of guarantees and delivery styles that make it suited to low-latency protocol processing and response. It is not intended to replace things like traditional task queues, but it is intended that it could be used for things like distributed systems communication, or as the backbone of a service-oriented architecture for inter-service communication.
Overview¶
ASGI consists of three different components: protocol servers, a channel layer, and application code. The channel layer is the key part of the implementation, providing an interface to both protocol servers and applications.
A channel layer provides a protocol server or an application server with a send callable, which takes a channel name and a message dict, and a receive callable, which takes a list of channel names and returns the next available message on any of those channels.
Thus, rather than pointing a protocol server directly at an application as in WSGI, in ASGI both the protocol server and the application point at a channel layer instance. The intention is that applications and protocol servers always run in separate processes or threads, and communicate via the channel layer.
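The send/receive split described above can be sketched as a toy in-memory layer. This is purely illustrative (the class and its structure are our own, not one of the real asgiref backends); it only shows the calling convention, not capacity, expiry, or network transport:

```python
from collections import deque

class InMemoryChannelLayer:
    """Toy single-process channel layer: FIFO queues keyed by channel name."""

    def __init__(self):
        self._queues = {}

    def send(self, channel, message):
        # Append a message dict to the named channel's queue.
        self._queues.setdefault(channel, deque()).append(message)

    def receive(self, channels, block=False):
        # Non-blocking: return the first available (channel, message) pair,
        # or (None, None) if every listed channel is empty.
        for channel in channels:
            queue = self._queues.get(channel)
            if queue:
                return channel, queue.popleft()
        return None, None

# A protocol server enqueues a request; an application worker picks it up.
layer = InMemoryChannelLayer()
layer.send("http.request", {"reply_channel": "http.response!a1b2", "path": "/"})
channel, message = layer.receive(["http.request"])
```

Both sides hold a reference to the same layer; neither ever calls the other directly.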
ASGI tries to be as compatible as possible by default, and so the only implementation of receive that must be provided is a fully-synchronous, nonblocking one. Implementations can then choose to implement a blocking mode in this method, and if they wish to go further, versions compatible with the asyncio or Twisted frameworks (or other frameworks that may become popular, thanks to the extension declaration mechanism).
The distinction between protocol servers and applications in this document is mainly there to distinguish their roles and to make explaining the concepts easier. There is no code-level distinction between the two, and it is entirely possible to have middleware-like code that translates messages between two different channel layers or channel names. It is expected that most deployments will fall into this pattern.
There is even room for a WSGI-like application abstraction on the application server side, with a callable which takes (channel, message, send_func), but this would be slightly too restrictive for many use cases and does not cover how to specify channel names to listen on. It is expected that frameworks will cover this use case.
Channels and Messages¶
All communication in an ASGI stack uses messages sent over channels. All messages must be a dict at the top level of the object, and contain only the following types in order to be serializable:
- Byte strings
- Unicode strings
- Integers (within the signed 64 bit range)
- Floating point numbers (within the IEEE 754 double precision range)
- Lists (tuples should be treated as lists)
- Dicts (keys must be unicode strings)
- Booleans
- None
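For illustration, here is a message built only from the permitted types; the key names are loosely modelled on the HTTP sub-specification and are not normative:

```python
# A hypothetical request-style message using only the permitted types.
message = {
    "reply_channel": "http.response!A1B2C3",    # unicode string
    "path": "/chat/",                           # unicode string
    "body": b"",                                # byte string
    "order": 0,                                 # integer (signed 64-bit range)
    "server_time": 0.25,                        # float (IEEE 754 double range)
    "headers": [[b"host", b"example.com"]],     # lists (of byte strings here)
    "meta": {"secure": False, "client": None},  # dict with unicode keys
}
```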
Channel names consist only of ASCII letters, digits, periods (.), dashes (-) and underscores (_), plus an optional type character (see below).
Channels are a first-in, first-out queue with at-most-once delivery semantics. They can have multiple writers and multiple readers; only a single reader should get each written message. Implementations must never deliver a message more than once or to more than one reader, and must drop messages if necessary to enforce this restriction.
In order to aid with scaling and network architecture, a distinction is made between channels that have multiple readers (such as the http.request channel that web applications would listen on from every application worker process), single-reader channels that are read from a single unknown location (such as http.request.body?ABCDEF), and process-specific channels (such as a http.response.A1B2C3!D4E5F6 channel tied to a client socket).
Normal channel names contain no type characters, and can be routed however the backend wishes; in particular, they do not have to appear globally consistent, and backends may shard their contents out to different servers so that a querying client only sees some portion of the messages. Calling receive on these channels does not guarantee that you will get the messages in order or that you will get anything if the channel is non-empty.
Single-reader channel names contain a question mark (?) character in order to indicate to the channel layer that it must make these channels appear globally consistent. The ? is always preceded by the main channel name (e.g. http.response.body) and followed by a random portion. Channel layers may use the random portion to help pin the channel to a server, but reads from this channel by a single process must always be in-order and return messages if the channel is non-empty. These names must be generated by the new_channel call.
Process-specific channel names contain an exclamation mark (!) that separates a remote and local part. These channels are received differently; only the name up to and including the ! character is passed to the receive() call, and it will receive any message on any channel with that prefix. This allows a process, such as a HTTP terminator, to listen on a single process-specific channel, and then distribute incoming requests to the appropriate client sockets using the local part (the part after the !). The local parts must be generated and managed by the process that consumes them. These channels, like single-reader channels, are guaranteed to give any extant messages in order if received from a single process.
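A small helper, assuming nothing beyond the naming rules above, shows how the three kinds of names can be told apart and how the receive() prefix for a process-specific channel is derived (the helper names are ours):

```python
def channel_kind(name):
    """Classify a channel name by its optional type character."""
    if "!" in name:
        return "process-specific"
    if "?" in name:
        return "single-reader"
    return "normal"

def receive_prefix(name):
    # For process-specific channels, only the part up to and including the
    # "!" is passed to receive(); the local part routes within the process.
    return name.split("!", 1)[0] + "!" if "!" in name else name

assert channel_kind("http.request") == "normal"
assert channel_kind("http.request.body?ABCDEF") == "single-reader"
assert channel_kind("http.response.A1B2C3!D4E5F6") == "process-specific"
assert receive_prefix("http.response.A1B2C3!D4E5F6") == "http.response.A1B2C3!"
```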
Messages should expire if they sit unread on a channel for longer than a set time. The recommendation is one minute, although the best value depends on the channel layer and how it is deployed.
The maximum message size is 1MB if the message were encoded as JSON; if more data than this needs to be transmitted it must be chunked or placed onto its own single-reader or process-specific channel (see how HTTP request bodies are done, for example). All channel layers must support messages up to this size, but protocol specifications are encouraged to keep well below it.
Handling Protocols¶
ASGI messages represent two main things: internal application events (for example, a channel might be used to queue thumbnailing of previously uploaded videos), and protocol events to/from connected clients.
As such, there are sub-specifications that outline encodings to and from ASGI messages for common protocols like HTTP and WebSocket; in particular, the HTTP one covers the WSGI/ASGI interoperability. It is recommended that if a protocol becomes commonplace, it should gain standardized formats in a sub-specification of its own.
The message formats are a key part of the specification; without them, the protocol server and web application might be able to talk to each other, but may not understand some of what the other is saying. It's equivalent to the standard keys in the environ dict for WSGI.
The design pattern is that most protocols will share a few channels for incoming data (for example, http.request, websocket.connect and websocket.receive), but will have individual channels for sending to each client (such as http.response!kj2daj23). This allows incoming data to be dispatched into a cluster of application servers that can all handle it, while responses are routed to the individual protocol server that has the other end of the client's socket.
Some protocols, however, do not have the concept of a unique socket connection; for example, an SMS gateway protocol server might just have sms.receive and sms.send, and the protocol server cluster would take messages from sms.send and route them into the normal phone network based on attributes in the message (in this case, a telephone number).
Extensions¶
Extensions are functionality that is not required for basic application code and nearly all protocol server code, and so has been made optional in order to enable lightweight channel layers for applications that don’t need the full feature set defined here.
The extensions defined here are:
- groups: Allows grouping of channels to allow broadcast; see below for more.
- flush: Allows easier testing and development with channel layers.
- statistics: Allows channel layers to provide global and per-channel statistics.
- twisted: Async compatibility with the Twisted framework.
- asyncio: Async compatibility with Python 3's asyncio.
There is potential to add further extensions; these may be defined by a separate specification, or a new version of this specification.
If application code requires an extension, it should check for it as soon as possible, and hard error if it is not provided. Frameworks should encourage optional use of extensions, while attempting to move any extension-not-found errors to process startup rather than message handling.
Groups¶
While the basic channel model is sufficient to handle basic application needs, many more advanced uses of asynchronous messaging require notifying many users at once when an event occurs - imagine a live blog, for example, where every viewer should get a long poll response or WebSocket packet when a new entry is posted.
This concept could be kept external to the ASGI spec, and would be, if it were not for the significant performance gains a channel layer implementation could make on the send-group operation by having it included - the alternative being a send_many callable that might have to take tens of thousands of destination channel names in a single call. However, the group feature is still optional; its presence is indicated by the supports_groups attribute on the channel layer object.
Thus, there is a simple Group concept in ASGI, which acts as the broadcast/multicast mechanism across channels. Channels are added to a group, and then messages sent to that group are sent to all members of the group. Channels can be removed from a group manually (e.g. based on a disconnect event), and the channel layer will garbage collect “old” channels in groups on a periodic basis.
How this garbage collection happens is not specified here, as it depends on the internal implementation of the channel layer. The recommended approach, however, is when a message on a process-specific channel expires, the channel layer should remove that channel from all groups it’s currently a member of; this is deemed an acceptable indication that the channel’s listener is gone.
Implementation of the group functionality is optional. If it is not provided and an application or protocol server requires it, they should hard error and exit with an appropriate error message. It is expected that protocol servers will not need to use groups.
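A minimal sketch of the broadcast behaviour, using a toy layer that implements just enough of the groups extension (the class and all channel/group names here are illustrative, not part of the spec):

```python
from collections import defaultdict, deque

class TinyGroupLayer:
    """Toy layer implementing a subset of the "groups" extension."""
    extensions = ["groups"]

    def __init__(self):
        self._queues = defaultdict(deque)
        self._groups = defaultdict(set)

    def send(self, channel, message):
        self._queues[channel].append(message)

    def group_add(self, group, channel):
        self._groups[group].add(channel)

    def group_discard(self, group, channel):
        self._groups[group].discard(channel)

    def send_group(self, group, message):
        # Broadcast to every member; a real layer silently drops the message
        # for members that are over capacity (at-most-once delivery).
        for channel in self._groups[group]:
            self.send(channel, message)

# Two live-blog readers join a group; one post reaches both reply channels.
layer = TinyGroupLayer()
layer.group_add("liveblog", "websocket.send!aaa")
layer.group_add("liveblog", "websocket.send!bbb")
layer.send_group("liveblog", {"text": "New entry posted"})
```

A real layer would also garbage-collect expired member channels, as described above.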
Linearization¶
ASGI is designed to enable a shared-nothing architecture, where messages can be handled by any one of a set of threads, processes or machines running application code.
This, of course, means that several different copies of the application could be handling messages simultaneously, and those messages could even be from the same client; in the worst case, two packets from a client could even be processed out-of-order if one server is slower than another.
This is an existing issue with things like WSGI as well - a user could open two different tabs to the same site at once and launch simultaneous requests to different servers - but the nature of the new protocols specified here mean that collisions are more likely to occur.
Solving this issue is left to frameworks and application code; there are already solutions such as database transactions that help solve this, and the vast majority of application code will not need to deal with this problem. If ordering of incoming packets matters for a protocol, they should be annotated with a packet number (as WebSocket is in its specification).
Single-reader and process-specific channels, such as those used for response channels back to clients, are not subject to this problem; a single reader on these must always receive messages in channel order.
Capacity¶
To provide backpressure, each channel in a channel layer may have a capacity, defined however the layer wishes (it is recommended that it is configurable by the user using keyword arguments to the channel layer constructor, and furthermore configurable per channel name or name prefix).
When a channel is at or over capacity, trying to send() to that channel may raise ChannelFull, which indicates to the sender the channel is over capacity. How the sender wishes to deal with this will depend on context; for example, a web application trying to send a response body will likely wait until it empties out again, while a HTTP interface server trying to send in a request would drop the request and return a 503 error.
Process-local channels must apply their capacity on the non-local part (that is, up to and including the ! character), and so capacity is shared among all of the "virtual" channels inside it.
Sending to a group never raises ChannelFull; instead, it must silently drop the message if it is over capacity, as per ASGI’s at-most-once delivery policy.
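The two sender strategies can be sketched as follows, assuming a hypothetical capped channel object; ChannelFull here is a local stand-in for the layer's real exception:

```python
import time
from collections import deque

class ChannelFull(Exception):
    """Stand-in for the exception a layer raises at capacity."""

class CappedChannel:
    # A single channel with a fixed capacity (illustration only).
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def send(self, message):
        if len(self.queue) >= self.capacity:
            raise ChannelFull()
        self.queue.append(message)

def send_with_retry(channel, message, retries=3, wait=0.01):
    # How a web application might react: wait for the channel to drain and
    # retry. An interface server would instead drop and return a 503.
    for _ in range(retries):
        try:
            channel.send(message)
            return True
        except ChannelFull:
            time.sleep(wait)
    return False

channel = CappedChannel(capacity=1)
assert send_with_retry(channel, {"chunk": 1}) is True
assert send_with_retry(channel, {"chunk": 2}) is False  # nothing drained it
```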
Specification Details¶
A channel layer must provide an object with these attributes (all function arguments are positional):
- send(channel, message), a callable that takes two arguments: the channel to send on, as a unicode string, and the message to send, as a serializable dict.
- receive(channels, block=False), a callable that takes a list of channel names as unicode strings, and returns with either (None, None) or (channel, message) if a message is available. If block is True, then it will not return until a message arrives (or optionally until a built-in timeout, but it is valid to block forever if there are no messages); if block is False, it will always return immediately. It is perfectly valid to ignore block and always return immediately, or after a delay; block means that the call can take as long as it likes before returning a message or nothing, not that it must block until it gets one.
- new_channel(pattern), a callable that takes a unicode string pattern, and returns a new valid channel name that does not already exist, by adding a unicode string after the ! or ? character in pattern, and checking for existence of that name in the channel layer. The pattern must end with ! or ? or this function must error. If the character is !, making it a process-specific channel, new_channel must be called on the same channel layer that intends to read the channel with receive; any other channel layer instance may not receive messages on this channel due to client-routing portions of the appended string.
- MessageTooLarge, the exception raised when a send operation fails because the encoded message is over the layer's size limit.
- ChannelFull, the exception raised when a send operation fails because the destination channel is over capacity.
- extensions, a list of unicode string names indicating which extensions this layer provides, or an empty list if it supports none. The possible extensions can be seen in Extensions.
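Of these, new_channel is the least obvious; here is a sketch under the rules above. The random-suffix scheme is an assumption - layers may generate the appended portion however they like, as long as the resulting name is unused:

```python
import random
import string

def new_channel(existing_names, pattern):
    """Sketch of new_channel(pattern): the pattern must end in "!" or "?",
    and a random suffix is appended until an unused name is found."""
    if not pattern.endswith(("!", "?")):
        raise ValueError("pattern must end with '!' or '?'")
    alphabet = string.ascii_letters + string.digits
    while True:
        name = pattern + "".join(random.choice(alphabet) for _ in range(12))
        if name not in existing_names:
            existing_names.add(name)
            return name

used = set()
reply_channel = new_channel(used, "http.response!")
```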
A channel layer implementing the groups extension must also provide:
- group_add(group, channel), a callable that takes a channel and adds it to the group given by group. Both are unicode strings. If the channel is already in the group, the function should return normally.
- group_discard(group, channel), a callable that removes the channel from the group if it is in it, and does nothing otherwise.
- group_channels(group), a callable that returns an iterable which yields all of the group's member channel names. The return value should be serializable with regards to local adds and discards, but best-effort with regards to adds and discards on other nodes.
- send_group(group, message), a callable that takes two positional arguments; the group to send to, as a unicode string, and the message to send, as a serializable dict. It may raise MessageTooLarge but cannot raise ChannelFull.
- group_expiry, an integer number of seconds that specifies how long group membership is valid for after the most recent group_add call (see Persistence below)
A channel layer implementing the statistics extension must also provide:
- global_statistics(), a callable that returns statistics across all channels
- channel_statistics(channel), a callable that returns statistics for the specified channel
In both cases, statistics are a dict with zero or more of the following keys (unicode strings):
- messages_count, the number of messages processed since server start
- messages_count_per_second, the number of messages processed in the last second
- messages_pending, the current number of messages waiting
- messages_max_age, how long the oldest message has been waiting, in seconds
- channel_full_count, the number of times the ChannelFull exception has been raised since server start
- channel_full_count_per_second, the number of times the ChannelFull exception has been raised in the last second
Implementations may provide total counts, counts per second, or both.
A channel layer implementing the flush extension must also provide:
- flush(), a callable that resets the channel layer to a blank state, containing no messages and no groups (if the groups extension is implemented). This call must block until the system is cleared and will consistently look empty to any client, if the channel layer is distributed.
A channel layer implementing the twisted extension must also provide:
- receive_twisted(channels), a function that behaves like receive but that returns a Twisted Deferred that eventually returns either (channel, message) or (None, None). It is not possible to run it in nonblocking mode; use the normal receive for that.
A channel layer implementing the async extension must also provide:
- receive_async(channels), a function that behaves like receive but that fulfills the asyncio coroutine contract to block until either a result is available or an internal timeout is reached and (None, None) is returned. It is not possible to run it in nonblocking mode; use the normal receive for that.
Channel Semantics¶
Channels must:
- Preserve ordering of messages perfectly with only a single reader and writer if the channel is a single-reader or process-specific channel.
- Never deliver a message more than once.
- Never block on message send (though they may raise ChannelFull or MessageTooLarge)
- Be able to handle messages of at least 1MB in size when encoded as JSON (the implementation may use better encoding or compression, as long as it meets the equivalent size)
- Have a maximum name length of at least 100 bytes.
They should attempt to preserve ordering in all cases as much as possible, but perfect global ordering is obviously not possible in the distributed case.
They are not expected to deliver all messages, but a success rate of at least 99.99% is expected under normal circumstances. Implementations may want to have a “resilience testing” mode where they deliberately drop more messages than usual so developers can test their code’s handling of these scenarios.
Persistence¶
Channel layers do not need to persist data long-term; group memberships only need to live as long as a connection does, and messages only as long as the message expiry time, which is usually a couple of minutes.
That said, if a channel server goes down momentarily and loses all data, persistent socket connections will continue to transfer incoming data and send out new generated data, but will have lost all of their group memberships and in-flight messages.
In order to avoid a nasty set of bugs caused by these half-deleted sockets, protocol servers should quit and hard restart if they detect that the channel layer has gone down or lost data; shedding all existing connections and letting clients reconnect will immediately resolve the problem.
If a channel layer implements the groups extension, it must persist group membership until at least the time when the member channel has a message expire due to non-consumption, after which it may drop membership at any time. If a channel subsequently has a successful delivery, the channel layer must then not drop group membership until another message expires on that channel.
Channel layers must also drop group membership after a configurable long timeout after the most recent group_add call for that membership, the default being 86,400 seconds (one day). The value of this timeout is exposed as the group_expiry property on the channel layer.
Protocol servers must have a configurable timeout value for every connection-based protocol they serve that closes the connection after the timeout, and should default this value to the value of group_expiry, if the channel layer provides it. This allows old group memberships to be cleaned up safely, knowing that after the group expiry the original connection must have closed, or is about to be in the next few seconds.
It’s recommended that end developers put the timeout setting much lower - on the order of hours or minutes - to enable better protocol design and testing. Even with ASGI’s separation of protocol server restart from business logic restart, you will likely need to move and reprovision protocol servers, and making sure your code can cope with this is important.
Message Formats¶
These describe the standardized message formats for the protocols this specification supports. All messages are dicts at the top level, and all keys are required unless explicitly marked as optional. If a key is marked optional, a default value is specified, which is to be assumed if the key is missing. Keys are unicode strings.
The one common key across all protocols is reply_channel, a way to indicate the client-specific channel to send responses to. Protocols are generally encouraged to have one message type and one reply channel type to ensure ordering. A reply_channel should be unique per connection. If the protocol in question can have any server service a response - e.g. a theoretical SMS protocol - it should not have reply_channel attributes on messages, but instead a separate top-level outgoing channel.
Messages are specified here along with the channel names they are expected on; if a channel name can vary, such as with reply channels, the varying portion will be represented by !, such as http.response!, which matches the format the new_channel callable takes.
There is no label on message types to say what they are; their type is implicit in the channel name they are received on. Two types that are sent on the same channel, such as HTTP responses and response chunks, are distinguished apart by their required fields.
Message formats can be found in the sub-specifications:
Protocol Format Guidelines¶
Message formats for protocols should follow these rules, unless a very good performance or implementation reason is present:
- reply_channel should be unique per logical connection, and not per logical client.
- If the protocol has server-side state, entirely encapsulate that state in the protocol server; do not require the message consumers to use an external state store.
- If the protocol has low-level negotiation, keepalive or other features, handle these within the protocol server and don't expose them in ASGI messages.
- If the protocol has guaranteed ordering and does not use a specific channel for a given connection (as HTTP does for body data), ASGI messages should include an order field (0-indexed) that preserves the ordering as received by the protocol server (or as sent by the client, if available). This ordering should span all message types emitted by the client - for example, a connect message might have order 0, and the first two frames order 1 and 2.
- If the protocol is datagram-based, one datagram should equal one ASGI message (unless size is an issue)
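For example, the order field rule might look like this on the wire. These are hypothetical WebSocket-style messages; the channel and key names are illustrative:

```python
# Client messages carrying the 0-indexed "order" field across message types.
connect = {"reply_channel": "websocket.send!abc", "path": "/chat/", "order": 0}
frame_1 = {"reply_channel": "websocket.send!abc", "text": "hello", "order": 1}
frame_2 = {"reply_channel": "websocket.send!abc", "text": "world", "order": 2}

# A consumer that received them out of order can restore the client's sequence:
received = [frame_2, connect, frame_1]
received.sort(key=lambda m: m["order"])
```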
Approximate Global Ordering¶
While maintaining true global (across-channels) ordering of messages is entirely unreasonable to expect of many implementations, they should strive to prevent busy channels from overpowering quiet channels.
For example, imagine two channels, busy, which spikes to 1000 messages a second, and quiet, which gets one message a second. There's a single consumer running receive(['busy', 'quiet']) which can handle around 200 messages a second.
In a simplistic for-loop implementation, the channel layer might always check busy first; it always has messages available, and so the consumer never even gets to see a message from quiet, even if it was sent with the first batch of busy messages.
A simple way to solve this is to randomize the order of the channel list when looking for messages inside the channel layer; other, better methods are also available, but whatever is chosen, it should try to avoid a scenario where a message doesn’t get received purely because another channel is busy.
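The randomization fix can be sketched like so (illustrative only; real layers may use better strategies, as noted above):

```python
import random
from collections import deque

def fair_receive(queues, channels):
    """Check channels in a random order so a busy channel cannot
    permanently starve a quiet one."""
    order = list(channels)
    random.shuffle(order)
    for channel in order:
        queue = queues.get(channel)
        if queue:
            return channel, queue.popleft()
    return None, None

queues = {
    "busy": deque({"n": i} for i in range(100)),
    "quiet": deque([{"text": "hi"}]),
}
# 101 receives drain all 101 messages; "quiet" is guaranteed to be seen.
results = [fair_receive(queues, ["busy", "quiet"]) for _ in range(101)]
```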
Strings and Unicode¶
In this document, and all sub-specifications, byte string refers to str on Python 2 and bytes on Python 3. If this type still supports Unicode codepoints due to the underlying implementation, then any values should be kept within the 0 - 255 range.
Unicode string refers to unicode on Python 2 and str on Python 3. This document will never specify just string - all strings are one of the two exact types.
Some serializers, such as json, cannot differentiate between byte strings and unicode strings; these should include logic to box one type as the other (for example, encoding byte strings as base64 unicode strings with a preceding special character, e.g. U+FFFF).
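One possible boxing scheme along those lines - the U+FFFF prefix follows the suggestion above, while the helper names are ours:

```python
import base64
import json

PREFIX = "\uffff"  # marker character for boxed byte strings

def box(value):
    # Encode byte strings as marked base64 unicode strings; pass others through.
    if isinstance(value, bytes):
        return PREFIX + base64.b64encode(value).decode("ascii")
    return value

def unbox(value):
    # Reverse box(): detect the marker and decode back to bytes.
    if isinstance(value, str) and value.startswith(PREFIX):
        return base64.b64decode(value[len(PREFIX):])
    return value

# Round-trip a binary body through JSON without losing the bytes/str split.
payload = json.dumps({"body": box(b"\x00\x01binary")})
restored = unbox(json.loads(payload)["body"])
```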
Channel and group names are all unicode strings, with the additional limitation that they only use the following characters:
- ASCII letters
- The digits 0 through 9
- Hyphen -
- Underscore _
- Period .
- Question mark ? (only to delineate single-reader channel names, and only one per name)
- Exclamation mark ! (only to delineate process-specific channel names, and only one per name)
Common Questions¶
Why are messages dicts, rather than a more advanced type?
We want messages to be easy to work with, especially when crossing process and machine boundaries, so a simple serializable type seemed best. We expect frameworks to wrap each protocol's set of messages in their own custom classes (e.g. http.request messages become a Request object).
Copyright¶
This document has been placed in the public domain.
Community Projects¶
These projects from the community are built on top of Channels:
Djangobot, a bi-directional interface server for Slack.
knocker, a generic desktop-notification system.
Beatserver, a periodic task scheduler for django channels.
cq, a simple distributed task system.
Debugpannel, a debug toolbar panel for channels.
If you'd like to add your project, please send us a PR with a link and a brief description.
Contributing¶
If you're looking to contribute to Channels, then please read on - we encourage contributions of any size, from newcomers through to seasoned developers.
What can I work on?¶
We're looking for help with the following areas:
- Documentation and tutorial writing
- Bugfixing and testing
- Feature polish and occasional new feature design
- Case studies and writeups
You can find what we’re looking to work on in the GitHub issues list for each of the Channels sub-projects:
- Channels issues, for the Django integration and overall project efforts
- Daphne issues, for the HTTP and Websocket termination
- asgiref issues, for the base ASGI library/memory backend
- asgi_redis issues, for the Redis channel backend
- asgi_rabbitmq, for the RabbitMQ channel backend
- asgi_ipc issues, for the POSIX IPC channel backend
Issues are categorized by experience level:
- exp/beginner: Easy issues suitable for a first-time contributor.
- exp/intermediate: Moderate issues that need skill and a day or two to solve.
- exp/advanced: Difficult issues that require expertise and potentially weeks of work.
They are also classified by type:
- documentation: Documentation issues. Pick these if you want to help us by writing docs.
- bug: A bug in existing code. Usually easier for beginners as there's a defined thing to fix.
- enhancement: A new feature for the code; may be a bit more open-ended.
You should filter the issues list by the experience level and type of work you'd like to do, and then if you want to take something on leave a comment and assign yourself to it. If you want advice about how to take on a bug, leave a comment asking about it, or pop into the IRC channel at #django-channels on Freenode and we'll be happy to help.
The issues are also just a suggested list - any offer to help is welcome as long as it fits the project goals, but you should make an issue for the thing you wish to do and discuss it first if it’s relatively large (but if you just found a small bug and want to fix it, sending us a pull request straight away is fine).
I'm a beginner contributor/developer - can I help?¶
Of course! The issues labelled with exp/beginner are a perfect place to get started, as they're usually small and well defined. If you want help with one of them, pop into the IRC channel at #django-channels on Freenode or get in touch with Andrew directly at andrew@aeracode.org.
Can you pay me for my time?¶
Thanks to Mozilla, we have a reasonable budget to pay people for their time working on all of the above sorts of tasks and more. Generally, we'd prefer to fund larger projects (you can find these labelled as epic-project in the issues lists) to reduce the administrative overhead, but we're open to any proposal.
If you’re interested in working on something and being paid, you’ll need to draw up a short proposal and get in touch with the committee, discuss the work and your history with open-source contribution (we strongly prefer that you have a proven track record on at least a few things) and the amount you’d like to be paid.
If you’re interested in working on one of these tasks, get in touch with Andrew Godwin (andrew@aeracode.org) as a first point of contact; he can help talk you through what’s involved, and help judge/refine your proposal before it goes to the committee.
Tasks not on any issues list can also be proposed; Andrew can help talk about them and if they would be sensible to do.
Release Notes¶
1.0.0 Release Notes¶
Channels 1.0.0 brings together a number of design changes, including some breaking changes, into our first fully stable release, and also brings the databinding code out of alpha phase. It was released on 2017/01/08.
The result is a faster, easier to use, and safer Channels, including one major change that will fix almost all problems with sessions and connect/receive ordering in a way that needs no persistent storage.
It was unfortunately not possible to make all of the changes backwards compatible, though most code should not be too affected and the fixes are generally quite easy.
You must also update Daphne to at least 1.0.0 to have this release of Channels work correctly.
Major Features¶
Channels 1.0 introduces a couple of new major features.
WebSocket accept/reject flow¶
Rather than be immediately accepted, WebSockets now pause during the handshake while they send over a message on websocket.connect, and your application must either accept or reject the connection before the handshake is completed and messages can be received.
You must update Daphne to at least 1.0.0 to make this work correctly.
This has several advantages:
- You can now reject WebSockets before they even finish connecting, giving appropriate error codes to browsers and not letting the browser-side socket ever get into a connected state and send messages.
- Combined with Consumer Atomicity (below), it means there is no longer any need for the old "slight ordering" mode, as the connect consumer must run to completion and accept the socket before any messages can be received and forwarded onto websocket.receive.
- Any send message sent to the WebSocket will implicitly accept the connection, meaning only a limited set of connect consumers need changes (see Backwards Incompatible Changes below)
Consumer Atomicity¶
Consumers will now buffer messages you try to send until the consumer completes and then send them once it exits and the outbound part of any decorators have been run (even if an exception is raised).
This makes the flow of messages much easier to reason about - consumers can now be reasoned about as atomic blocks that run and then send messages, meaning that if you send a message to start another consumer you’re guaranteed that the sending consumer has finished running by the time it’s acted upon.
If you want to send messages immediately rather than at the end of the consumer, you can still do that by passing the immediately argument:
Channel("thumbnailing-tasks").send({"id": 34245}, immediately=True)
This should be mostly backwards compatible, and may actually fix race conditions in some apps that were pre-existing.
Databinding Group/Action Overhaul¶
Previously, databinding subclasses had to implement group_names(instance, action) to return what groups to send an instance's change to of the type action. This had flaws, most notably when what was actually just a modification to the instance in question changed its permission status so more clients could see it; to those clients, it should instead have been "created".
Now, Channels just calls group_names(instance), and you should return what groups can see the instance at the current point in time given the instance you were passed. Channels will actually call the method before and after changes, comparing the groups you gave, and sending out create, update or delete messages to clients appropriately.
Existing databinding code will need to be adapted; see the “Backwards Incompatible Changes” section for more.
Demultiplexer Overhaul¶
Demultiplexers have changed to remove the behaviour where they re-sent messages onto new channels without special headers, and instead now correctly split out incoming messages into sub-messages that still look like websocket.receive messages, and directly dispatch these to the relevant consumer.
They also now forward all websocket.connect and websocket.disconnect messages to all of their sub-consumers, so it's much easier to compose things together from code that also works outside the context of multiplexing.
For more information, read the updated :doc:`/generic` documentation.
Minor Changes¶
- Serializers can now specify fields as __all__ to auto-include all fields, and exclude to remove certain unwanted fields.
- runserver respects FORCE_SCRIPT_NAME
- Websockets can now be closed with a specific code by calling close(status=4000)
- enforce_ordering no longer has a slight mode (because of the accept flow changes), and is more efficient with session saving.
- runserver respects --nothreading and only launches one worker, takes a --http-timeout option if you want to override it from the default 60,
- A new @channel_and_http_session decorator rehydrates the HTTP session out of the channel session if you want to access it inside receive consumers.
- Streaming responses no longer have a chance of being cached.
- request.META['SERVER_PORT'] is now always a string.
- http.disconnect now has a path key so you can route it.
- Test client now has a send_and_consume method.
Backwards Incompatible Changes¶
Connect Consumers¶
If you have a custom consumer for websocket.connect, you must ensure that it either:
- Sends at least one message onto the reply_channel that generates a WebSocket frame (either bytes or text is set), either directly or via a group.
- Sends a message onto the reply_channel that is {"accept": True}, to accept a connection without sending data.
- Sends a message onto the reply_channel that is {"close": True}, to reject a connection mid-handshake.
Many consumers already do the former, but if your connect consumer does not send anything you MUST now send an accept message or the socket will remain in the handshaking phase forever and you’ll never get any messages.
All built-in Channels consumers (e.g. in the generic consumers) have been upgraded to do this.
You must update Daphne to at least 1.0.0 to make this work correctly.
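The accept flow above can be sketched as follows. This is an illustration only: ReplyChannel is a stand-in for the real message.reply_channel object, and ws_connect and its allowed flag are hypothetical names, so the example runs without Channels installed.

```python
class ReplyChannel:
    """Stand-in for message.reply_channel; records what was sent."""
    def __init__(self):
        self.sent = []

    def send(self, content):
        self.sent.append(content)

def ws_connect(message, allowed=True):
    if allowed:
        # Accept the handshake without sending any data frame.
        # Without this (or a frame), the socket would stay in the
        # handshaking phase forever.
        message["reply_channel"].send({"accept": True})
    else:
        # Reject the connection mid-handshake.
        message["reply_channel"].send({"close": True})

reply = ReplyChannel()
ws_connect({"reply_channel": reply})
# reply.sent is now [{"accept": True}]
```

The same shape applies to rejection: sending {"close": True} instead of {"accept": True} closes the socket before the handshake completes.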
Databinding group_names¶
If you have databinding subclasses, you will have implemented group_names(instance, action), which returns the groups to use based on the instance and action provided.
Now, instead, you must implement group_names(instance), which returns the groups that can see the instance as it is presented for you; the action results will be worked out for you. For example, if you want to show objects marked as "admin_only" only to admins, and objects without it to everyone, previously you would have done:
def group_names(self, instance, action):
    if instance.admin_only:
        return ["admins"]
    else:
        return ["admins", "non-admins"]
Because you did nothing based on the action
(and if you did, you would
have got incomplete messages, hence this design change), you can just change
the signature of the method like this:
def group_names(self, instance):
    if instance.admin_only:
        return ["admins"]
    else:
        return ["admins", "non-admins"]
Now, when an object is updated to have admin_only = True, the clients in the non-admins group will get a delete message, while those in the admins group will get an update message.
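The before/after group comparison Channels performs can be sketched in isolation. diff_groups below is a hypothetical helper, not part of the Channels API; it only illustrates how create, update, and delete actions fall out of the two group lists.

```python
def diff_groups(old_groups, new_groups):
    """Derive a per-group action from the groups returned by
    group_names() before and after a model change."""
    actions = {}
    for group in set(old_groups) | set(new_groups):
        if group in old_groups and group in new_groups:
            actions[group] = "update"   # group could see it before and after
        elif group in new_groups:
            actions[group] = "create"   # group gained visibility
        else:
            actions[group] = "delete"   # group lost visibility
    return actions

# An object changes from admin_only=False to admin_only=True:
actions = diff_groups(["admins", "non-admins"], ["admins"])
# actions == {"admins": "update", "non-admins": "delete"}
```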
Demultiplexers¶
Demultiplexers have changed from using a mapping dict, which mapped stream names to channels, to using a consumers dict, which maps stream names directly to consumer classes.
You will have to convert over to using direct references to consumers, change the name of the dict, and then you can remove any channel routing for the old channels that were in mapping from your routes.
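A minimal sketch of the conversion, using stub classes so it runs without Channels installed. In real code, Demultiplexer would subclass the WebsocketDemultiplexer generic consumer, and EchoConsumer is a made-up name for one of your consumers.

```python
class EchoConsumer:
    """Stand-in for one of your consumer classes."""
    pass

class Demultiplexer:
    # Old style (removed):
    #     mapping = {"echo": "echo.receive"}
    # which then needed its own channel routing entry for "echo.receive".
    #
    # New style: stream names map directly to consumer classes,
    # so the extra routing entries can be deleted.
    consumers = {"echo": EchoConsumer}

# Dispatch for the "echo" stream now goes straight to the consumer class:
handler = Demultiplexer.consumers["echo"]
```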
Additionally, the Demultiplexer now forwards messages as they would look from a direct connection, meaning that where you previously got a decoded object through, you will now get a correctly-formatted websocket.receive message through, with the content JSON-encoded under a text key. You will also now have to handle websocket.connect and websocket.disconnect messages.
Both of these issues can be solved using the JsonWebsocketConsumer generic consumer, which will decode for you and correctly separate connection and disconnection handling into their own methods.
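To make the new message shape concrete, here is a sketch of what a sub-consumer now receives: a normal websocket.receive message whose text key holds the JSON-encoded content. The "stream"/"payload" envelope shown is illustrative of the multiplexed wire format, not an exhaustive specification.

```python
import json

# What the sub-consumer previously got: an already-decoded object.
# What it gets now: a websocket.receive-shaped message with JSON text.
frame = {"text": json.dumps({"stream": "echo", "payload": {"msg": "hi"}})}

# JsonWebsocketConsumer would do this decoding step for you:
decoded = json.loads(frame["text"])
# decoded["stream"] == "echo"; decoded["payload"] == {"msg": "hi"}
```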
1.0.1 Release Notes¶
Channels 1.0.1 is a minor bugfix release, released on 2017/01/09.
Changes¶
- The WebSocket generic views now accept connections by default in their connect handler, for better backwards compatibility.
Backwards Incompatible Changes¶
None
1.0.2 Release Notes¶
Channels 1.0.2 is a minor bugfix release, released on 2017/01/12.
Changes¶
- WebSockets can now be closed from anywhere using the new WebsocketCloseException, available as channels.exceptions.WebsocketCloseException(code=None). There is also a generic ChannelSocketException you can base any exceptions on; if it is caught, it is handed the current message in a run method, so you can implement custom behaviours.
- Calling Channel.send or Group.send from outside a consumer context (i.e. in tests or management commands) will once again send the message immediately, rather than putting it into the consumer message buffer to be flushed when the consumer ends (which never happens).
- The base implementation of databinding now correctly only calls group_names(instance), as documented.
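A sketch of how WebsocketCloseException might be raised from inside a consumer. The class is stubbed here (with only the code attribute implied by its signature) so the example runs without Channels installed; the real one lives at channels.exceptions.WebsocketCloseException, and ws_receive is a hypothetical consumer name.

```python
class WebsocketCloseException(Exception):
    """Stub of channels.exceptions.WebsocketCloseException(code=None)."""
    def __init__(self, code=None):
        self.code = code
        super().__init__("WebSocket closed with code %s" % code)

def ws_receive(message):
    # Close the socket from anywhere in the consumer by raising;
    # Channels catches the exception and closes with the given code.
    if not message.get("text"):
        raise WebsocketCloseException(code=4000)

try:
    ws_receive({"text": ""})
    close_code = None
except WebsocketCloseException as exc:
    close_code = exc.code
# close_code == 4000
```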
Backwards Incompatible Changes¶
None
1.0.3 Release Notes¶
Channels 1.0.3 is a minor bugfix release, released on 2017/02/01.
Changes¶
- Database connections are no longer forcibly closed after every test.
- Channel sessions are not re-saved if they’re empty even if they’re marked as modified, allowing logout to work correctly.
- WebsocketDemultiplexer now correctly does sessions for the second/third/etc. connect and disconnect handlers.
- Request reading timeouts now correctly return 408 rather than erroring out.
- The rundelay delay server now only polls the database once per second; this interval is configurable with the --sleep option.
Backwards Incompatible Changes¶
None
1.1.0 Release Notes¶
Channels 1.1.0 introduces a couple of major but backwards-compatible changes, including most notably the inclusion of a standard, framework-agnostic JavaScript library for easier integration with your site.
Major Changes¶
- Channels now includes a JavaScript wrapper that wraps reconnection and multiplexing for you on the client side. For more on how to use it, see the Channels WebSocket wrapper documentation.
- Test classes have been moved from channels.tests to channels.test to better match Django. Old imports from channels.tests will continue to work but will trigger a deprecation warning, and channels.tests will be removed completely in version 1.3.
Minor Changes & Bugfixes¶
- Bindings now support non-integer fields for primary keys on models.
- The enforce_ordering decorator no longer suffers a race condition where it would drop messages under high load.
- runserver no longer errors if the staticfiles app is not enabled in Django.
Backwards Incompatible Changes¶
None
1.1.1 Release Notes¶
Channels 1.1.1 is a bugfix release that fixes a packaging issue with the JavaScript files.
Major Changes¶
None.
Minor Changes & Bugfixes¶
- The JavaScript binding introduced in 1.1.0 is now correctly packaged and included in builds.
Backwards Incompatible Changes¶
None.
1.1.2 Release Notes¶
Channels 1.1.2 is a bugfix release for the 1.1 series, released on April 1st, 2017.
Major Changes¶
None.
Minor Changes & Bugfixes¶
- Session name hash changed to SHA-1 to satisfy FIPS-140-2.
- scheme key in ASGI-HTTP messages now translates into request.is_secure() correctly.
- WebsocketBridge now exposes the underlying WebSocket as .socket.
Backwards Incompatible Changes¶
- When you upgrade all current channel sessions will be invalidated; you should make sure you disconnect all WebSockets during upgrade.
1.1.3 Release Notes¶
Channels 1.1.3 is a bugfix release for the 1.1 series, released on April 5th, 2017.
Major Changes¶
None.
Minor Changes & Bugfixes¶
- enforce_ordering now works correctly with the new-style process-specific channels.
- ASGI channel layer versions are now explicitly checked for version compatibility.
Backwards Incompatible Changes¶
None.
1.1.4 Release Notes¶
Channels 1.1.4 is a bugfix release for the 1.1 series, released on June 15th, 2017.
Major Changes¶
None.
Minor Changes & Bugfixes¶
- Pending messages correctly handle retries in backlog situations
- Workers in threading mode now respond to ctrl-C and gracefully exit.
- request.META['QUERY_STRING'] is now correctly encoded at all times.
- Test client improvements.
- ChannelServerLiveTestCase added, which allows an equivalent of the Django LiveTestCase.
- A decorator added to check Origin headers (allowed_hosts_only).
- New TEST_CONFIG setting in CHANNEL_LAYERS that allows varying of the channel layer for tests (e.g. using a different Redis install).
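The TEST_CONFIG key sits alongside CONFIG in the CHANNEL_LAYERS setting. A sketch of what that might look like, assuming the asgi_redis backend; the host URLs (a separate Redis database for tests) are illustrative values, not defaults.

```python
# settings.py fragment: when tests run, TEST_CONFIG is used in place
# of CONFIG, so tests can hit a different Redis install or database.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": ["redis://localhost:6379/0"],  # normal runtime
        },
        "TEST_CONFIG": {
            "hosts": ["redis://localhost:6379/1"],  # tests only
        },
    }
}
```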
Backwards Incompatible Changes¶
None.