Write Thread bug found and fixed

Sometimes, when using threads, an error is raised from time to time (and it is very difficult to reproduce):

Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/bjsonrpc/proxies.py", line 43, in function
    return self._conn.proxy(self.sync_type, name, args, kwargs)
  File "/usr/local/lib/python2.6/dist-packages/bjsonrpc/connection.py", line 608, in proxy
    req = Request(self, data)
  File "/usr/local/lib/python2.6/dist-packages/bjsonrpc/request.py", line 68, in __init__
  File "/usr/local/lib/python2.6/dist-packages/bjsonrpc/connection.py", line 285, in addrequest
    assert(request.request_id not in self._requests)

Taking a look at the code, I found that the id-generation function could be executed concurrently, allowing it to return duplicate ids. I added a bugfix which ensures that only one call runs at a time. I hope this is enough to avoid the bug.
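The idea behind the fix can be sketched like this: protect the read-and-increment of the counter with a lock so no two threads can ever observe the same value. This is only an illustrative sketch in modern Python; the class and method names are hypothetical, not bjsonrpc's actual internals.

```python
import threading

class IdGenerator:
    """Illustrative sketch of a lock-protected request-id counter
    (names are hypothetical, not bjsonrpc's real code)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._next_id = 0

    def new_id(self):
        # Only one thread at a time may read and increment the
        # counter, so two concurrent requests can never share an id.
        with self._lock:
            self._next_id += 1
            return self._next_id
```

Without the lock, two threads could both read the same `_next_id` before either writes the increment back, which is exactly the duplicate-id race the assertion in `addrequest` was catching.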


Removing net delays when sending data

I noticed weeks ago in my projects that something odd was happening in bjsonrpc when using method proxies over laggy networks. Method proxies are meant to virtually remove the network lag by not having to wait for a response before continuing. And they work quite well, but in some (quite common) cases I noticed that method calls were still waiting for something. After doing some research today I found I was right: they were waiting. In fact, the definition of "method" proxies is very precise about the problem: a method proxy doesn't wait to receive a response, but it does wait until it knows that the other end has received the data sent.

In my Git devel branch I decided to try some minor changes that should address the problem, but may create new kinds of problems. First of all, there was a "socket" lock to prevent the socket from being used concurrently; this restriction is now removed for sending. There was already a lock to avoid concurrent send commands, so this is fine. I also noticed that send() will wait: it goes through a TCP queue, so yes, the call can block until the other end has acknowledged the data. And in a protocol like JSON-RPC this is very important; we have to be sure that the message has been delivered without problems.

I did a simple bugfix: I created a new thread for each write call, and I'm letting the locks keep the writes ordered. While this solves the problem, it creates a new thread per concurrent write, which is neither efficient nor necessary at all. I have to see how to fix this in the future (maybe for the next release); I believe the best option is to use a single thread and queue all calls. But I know this approach is difficult and I'm not going to solve it right now. There are more problems too: this creates a new kind of parallelism, which means more problems or more locks to be created. I'm going to test this code and see what happens.

For the moment, it seems that the issue is corrected in the devel branch, and it is now a lot faster than before on slow networks.

The problem I noticed here is that on fast networks it is slower than before (because of the cost of creating a new thread per call).

I should take a look at this later.

EDIT: I finally managed to correct the problem; it now uses a dedicated thread with semaphores and a queue. It is not as fast as before, but it has similar performance.
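The single-writer-thread design can be sketched as follows: callers enqueue messages and return immediately, and one long-lived background thread drains the queue in order. This sketch uses the stdlib `queue.Queue` (which handles the locking and signaling internally) instead of raw semaphores; the names are hypothetical and not taken from bjsonrpc's actual code.

```python
import queue
import socket
import threading

class QueueWriter:
    """Illustrative sketch of the dedicated-writer-thread approach:
    one background thread owns all socket sends, callers just
    enqueue (hypothetical names, not bjsonrpc's real code)."""

    def __init__(self, sock):
        self._sock = sock
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._writer_loop)
        self._thread.daemon = True
        self._thread.start()

    def write(self, data):
        # Returns immediately; the writer thread sends it in order.
        self._queue.put(data)

    def close(self):
        # A None sentinel tells the writer thread to stop.
        self._queue.put(None)
        self._thread.join()

    def _writer_loop(self):
        while True:
            data = self._queue.get()
            if data is None:
                break
            self._sock.sendall(data)
```

Compared with a thread per write, this keeps thread creation out of the hot path while still guaranteeing that messages go out in the order they were queued.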

New release 0.2.0

Hey, it has been a long time, some months now, but it's finally time to publish a new release of bjsonrpc. This new version includes several improvements:

  • Corrected connection timeouts for better connection stability over wireless links.
  • Added a shutdown method to handlers to make it possible to customize the closing process of the object.
  • Better error reporting when adding a function twice to the handler or when something goes wrong in json.dumps.
  • Added a threading option to server and client.
  • Added some locks that make it possible to use bjsonrpc with threads.
With these changes, and with the latest tests I have done with bjsonrpc, I would say that bjsonrpc is almost stable.
Download it at: