Documentation updated for 0.2.x at PyPI

I reviewed all the documentation and made some minor changes (adding some functions that were undocumented, and correcting typos).

I hope you find this new version of the docs more useful. (At some points it talks about a 0.2.1 that is not yet released; until it gets released, the code is on my Git branch called “devel”.)

PS: the GitHub pages are getting outdated and I’m planning to remove them… the documentation and the homepage there may never get updated again.

Removing net delays when sending data

A few weeks ago I noticed in my projects that something odd was happening in bjsonrpc when using method proxies over laggy networks. Method proxies are meant to virtually remove the network lag by not having to wait for a response before continuing. And they work quite well, but in some (quite common) cases I noticed that method calls were still waiting for something. After doing some research today I found I was right: they were waiting. In fact, the definition of “method” proxies is very precise about the problem: a method proxy doesn’t wait to receive a response, but it does wait until it knows that the other end has received the data sent.
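To make the distinction concrete, here is a minimal sketch of the two proxy styles over a raw socket. This is not bjsonrpc’s actual code — the names `call_proxy` and `method_proxy` are hypothetical — but it shows the intended contract: one blocks for the result, the other only sends the request and returns.

```python
import json
import socket

def call_proxy(sock, name, *params):
    """Send a JSON-RPC request and block until the response arrives."""
    request = {"id": 1, "method": name, "params": list(params)}
    sock.sendall((json.dumps(request) + "\n").encode())
    # Blocks here until the other end answers.
    response = json.loads(sock.makefile().readline())
    return response["result"]

def method_proxy(sock, name, *params):
    """Send a JSON-RPC request and return immediately, ignoring any result."""
    request = {"id": None, "method": name, "params": list(params)}
    sock.sendall((json.dumps(request) + "\n").encode())
```

The point of the post is that even `method_proxy` can stall: `sendall()` itself may wait if the outgoing TCP buffer is full.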

In my Git devel branch I decided to try some minor changes that should address the problem, but which may create new kinds of problems. First of all, there was a ‘socket’ lock to prevent the socket from being used concurrently; this restriction is now removed for sending. There was already a lock to avoid concurrent send commands, so that part is fine. I also noticed that send() itself will wait. It’s a TCP queue, so yes, you have to wait until the other end signals that it has received the message. And in a protocol like JSON-RPC this is very important: we have to be sure that the message has been delivered without problems.
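The blocking behaviour of send() is easy to observe in plain Python, independently of bjsonrpc. A minimal sketch: fill a socket’s send buffer while the peer never reads. With a non-blocking socket the stall shows up as `BlockingIOError` instead of a hang, which makes it visible.

```python
import socket

def fill_send_buffer(sock):
    """Send until the kernel send buffer is full.

    A blocking socket would stall at that point, waiting for the peer
    to drain data; non-blocking mode raises BlockingIOError instead.
    """
    sock.setblocking(False)
    sent = 0
    try:
        while True:
            sent += sock.send(b"x" * 4096)
    except BlockingIOError:
        return sent

a, b = socket.socketpair()  # peer "b" never reads, so "a" fills up
total = fill_send_buffer(a)
print("queued", total, "bytes before the buffer filled")
```

This is exactly the wait a method proxy was supposed to avoid: the caller is stuck not on the reply, but on getting its own bytes out.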

I did a simple bugfix: I created a new thread for each write call, and I’m letting the locks resolve the ordering. While this solves the problem, it creates a new thread per concurrent write, which is not efficient and not necessary at all. I have to see how to fix this in the future (maybe for the next release); I believe the best approach is to use a single thread and queue all the calls. But I know that approach is harder, and I’m not going to solve it right now. One more issue: this creates a new kind of parallelism, which is a synonym for more problems or more locks to be created. I’m going to test this code and see what happens.
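A sketch of that thread-per-write approach (simplified, not the actual bjsonrpc code): each write spawns a thread, and a shared lock keeps the sends serialized so the bytes of two messages never interleave.

```python
import socket
import threading

write_lock = threading.Lock()  # serializes concurrent sends

def async_write(sock, data):
    """Spawn a thread per write so the caller never blocks on send()."""
    def worker():
        with write_lock:       # only one send at a time
            sock.sendall(data)
    t = threading.Thread(target=worker)
    t.start()
    return t  # caller may join() it, but normally doesn't wait
```

The caller returns immediately, but at the cost of one live thread per in-flight write — which is exactly the inefficiency noted above.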

For the moment, it seems that the issue is corrected in the devel branch, and it is now a lot faster than before on slow networks.

The problem I noticed here is that on fast networks it is slower than before (because of the cost of creating a new thread per call).

I should take a look at this later.

EDIT: I finally managed to correct the problem, and it now uses a dedicated thread with semaphores and a queue. It’s not as fast as the original code, but it has similar performance.
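A sketch of that final design (again hypothetical, not the actual bjsonrpc code): a single dedicated writer thread drains a `queue.Queue`, so callers only enqueue and return. `queue.Queue` handles the semaphore bookkeeping internally.

```python
import queue
import socket
import threading

class Writer:
    """Single dedicated thread that drains a queue of outgoing messages."""

    def __init__(self, sock):
        self.sock = sock
        self.queue = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def write(self, data):
        self.queue.put(data)  # returns immediately; no per-call thread

    def close(self):
        self.queue.put(None)  # sentinel: tell the writer thread to stop
        self.thread.join()

    def _run(self):
        while True:
            data = self.queue.get()
            if data is None:
                break
            self.sock.sendall(data)  # only this thread ever sends
```

Since only one thread touches the socket for sending, the per-send lock becomes unnecessary, and the thread-creation cost is paid once instead of per call.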