background
previously, we designed a logger module that sends messages from ws clients to a db. this blog post is one real implementation of the python solution.
highly concurrent ws server
single server with multiple clients: a simple C++ example
the server is started first and waits for incoming client calls, periodically reporting its status: how many clients are still connected. meanwhile, once an incoming call is detected and accepted, the server creates a separate thread to handle that client. therefore, the server creates as many separate sessions as there are incoming clients.
how to handle multiple clients? once an incoming client call is received and accepted, the main server thread creates a new thread and passes the client connection to it.
what if client threads need access to some global variables? a semaphore (or mutex) instance is helpful.
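As a sketch of that idea in Python (the blog's implementation language) rather than C++: a lock-protected counter that every client thread updates. The names `on_client_connect`/`on_client_disconnect` are illustrative, not from the original server.

```python
import threading

# shared state touched by every client thread: an active-client counter
client_count = 0
count_lock = threading.Lock()   # a mutex; threading.Semaphore(1) would also work

def on_client_connect():
    global client_count
    with count_lock:            # serialize access to the shared counter
        client_count += 1

def on_client_disconnect():
    global client_count
    with count_lock:
        client_count -= 1

# simulate 50 clients connecting concurrently
threads = [threading.Thread(target=on_client_connect) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, concurrent `+= 1` updates could interleave and lose increments; with it, the counter the server periodically reports stays accurate.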
how the ws server handles multiple incoming connections
a socket object is often thought of as representing a connection, but that is not entirely true, since sockets can be active or passive. a socket object in passive/listen mode is created by listen() to wait for incoming connection requests. by definition, such a socket is not a connection; it just listens for connection requests.
accept() doesn't change the state of the passive socket created by listen(); instead it returns an active/connected socket, which represents a real connection. after accept() has returned the connected socket object, it can be called again on the passive socket, and again and again. this is known as an
accept loop
But a call to
accept()
takes time, so can't it miss incoming connection requests? it won't: there is a queue of pending connection requests, handled automatically by the TCP/IP stack of the OS. meaning, while accept()
can only deal with incoming connection requests one by one, no incoming request will be missed even when they arrive at a high rate.
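The accept loop and thread-per-client pattern above can be sketched in Python with the stdlib `socket` module (names like `handle_client` and the echo behavior are illustrative, not from the original server):

```python
import socket
import threading

def handle_client(conn, addr):
    # each accepted connection gets its own session thread
    data = conn.recv(1024)
    conn.sendall(data)          # a trivial echo session
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 0))   # port 0: let the OS pick a free port
server.listen(5)                # passive socket; backlog queues up to 5 pending requests

def accept_loop(n):
    # accept connections one by one; the OS queues any that arrive meanwhile
    for _ in range(n):
        conn, addr = server.accept()   # returns a NEW, connected (active) socket
        threading.Thread(target=handle_client, args=(conn, addr)).start()

threading.Thread(target=accept_loop, args=(1,), daemon=True).start()

# a client connects, sends, and reads the echo
client = socket.create_connection(server.getsockname())
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
```

Note that `server` (the passive socket) never carries data itself; only the sockets returned by `accept()` do.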
python env setup
the websockets
module requires python 3.6+, but the python version in the bare OS is 3.5, which gives:
|
|
the Retry error is fixed by adding the following lines to ~/.pip/pip.conf
|
|
which gives another error:
|
|
which is then fixed by going to /var/lib/dpkg/info/
and deleting all python3-websocket.*
files:
|
|
everything looks good, but it still reports:
|
|
Gave up setting up with the bare python; instead, created a new conda env and ran the following setup inside it, clean and simple:
|
|
during a remote test, if the ws server goes down unexpectedly, we need to kill the ws pid:
|
|
in the ws-client source, we see:
on_open: a callable object which is called when the websocket is opened. this function has one argument: the class instance itself. however, any customized callback function can take additional arguments, which is helpful.
on_message: a callable object which is called when data is received. on_message has two arguments: the first is the class instance, the second is the utf-8 string received from the server.
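One common way to give such callbacks additional arguments is to pre-bind them with `functools.partial` before handing the callable to the client library. A minimal sketch of the pattern (the callback body and `log_store` are hypothetical; the library itself still only passes the instance and the message):

```python
import functools

def on_message(ws, message, db_writer):
    # ws: the client instance; message: utf-8 payload from the server;
    # db_writer: the EXTRA argument we pre-bound ourselves
    db_writer.append(message)

log_store = []   # stands in for a real db-writer

# the ws-client library will call bound(ws, message); db_writer is already bound
bound = functools.partial(on_message, db_writer=log_store)
bound(None, "hello")   # simulate one incoming message
```

This keeps the callback signature the library expects while still letting it reach shared objects like a db-writer.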
we can implement a simple sqlalchemy orm db-writer and add it to the ws-server:
# methods of the ws-server class; requires: import asyncio, json, websockets
async def process(self, websocket, path):
    raw_ = await websocket.recv()
    jdata = json.loads(raw_)
    orm_obj = orm_(jdata)
    try:
        self.dbwriter_.write(orm_obj)
        print(jdata, "written to db successfully")
    except Exception as e:
        self.dbwriter_.rollback()
        print(e)
    greeting = "hello from server"
    await websocket.send(greeting)
    print(f"> {greeting}")

def run(self):
    if self.host and self.port:
        start_server = websockets.serve(self.process, self.host, self.port)
    else:
        start_server = websockets.serve(self.process, "localhost", 8867)
    asyncio.get_event_loop().run_until_complete(start_server)
    asyncio.get_event_loop().run_forever()
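The `dbwriter_` used above is not shown in this post. As a rough stand-in for the SQLAlchemy-based writer (swapping in the stdlib `sqlite3` module so the sketch is self-contained; the table layout and class name are assumptions):

```python
import json
import sqlite3

class DbWriter:
    """Minimal stand-in for the sqlalchemy orm db-writer used by the ws-server."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS logs (payload TEXT)")

    def write(self, record):
        # commit each record; raises on failure so the caller can roll back
        self.conn.execute("INSERT INTO logs (payload) VALUES (?)",
                          (json.dumps(record),))
        self.conn.commit()

    def rollback(self):
        self.conn.rollback()

writer = DbWriter()
writer.write({"level": "info", "msg": "client connected"})
count = writer.conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
```

The real implementation would map `jdata` to an ORM object and use a SQLAlchemy session's `add`/`commit`/`rollback` instead, but the write/rollback contract the server relies on is the same.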
in summary
in reality, each ws-client is integrated into an upper-layer application, which generates messages/logs and sends them to the ws-server, which in turn writes them to the db. thanks to asyncio, the performance is good so far. in the future, we may need some buffering at the ws-server.
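One possible shape for that buffering, sketched with `asyncio.Queue` (this is an assumption about future work, not something implemented here): ws handlers enqueue records, and a single consumer flushes them to the db in batches.

```python
import asyncio

async def db_flusher(queue, sink, batch_size=3):
    # hypothetical consumer: collect records and flush them in batches
    batch = []
    while True:
        item = await queue.get()
        if item is None:                  # sentinel: flush remainder and stop
            if batch:
                sink.append(list(batch))
            return
        batch.append(item)
        if len(batch) >= batch_size:
            sink.append(list(batch))      # one batched db write
            batch.clear()

async def demo():
    queue = asyncio.Queue()
    sink = []                             # stands in for the db writer
    flusher = asyncio.create_task(db_flusher(queue, sink))
    for i in range(5):                    # ws handlers would do queue.put(record)
        await queue.put(i)
    await queue.put(None)
    await flusher
    return sink

batches = asyncio.run(demo())
```

Batching like this decouples message arrival rate from db latency, at the cost of losing at most one in-memory batch on a crash.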
reference
a simple multi-client ws server