
Redis and Node.js and Socket.io Questions


Problem

I have just been learning Redis and Node.js. There are two questions I have for which I couldn't find any satisfying answer.

My first question is about reusing Redis clients within Node.js. I have found this question and answer: How to reuse redis connection in socket.io?, but it didn't satisfy me.

Now, if I create the redis client within the connection event, it will be spawned for each connection. So, if I have 20k concurrent users, there will be 20k redis clients.

If I put it outside of the connection event, it will be spawned only once.

The answer says that he creates three clients for each function, outside of the connection event.

However, from what I know of MySQL, when writing an application which spawns child processes and runs in parallel, you need to create your MySQL client within the function in which you are creating the child instances. If you create it outside, MySQL will give a "MySQL server has gone away" error, as the child processes will try to use the same connection. It should be created for each child process separately.

So, even if you create three different Redis clients for each function, if you have 30k concurrent users who send 2k messages concurrently, you should run into the same problem, right? So, every "user" should have their own Redis client within the connection event. Am I right? If not, how do Node.js and Redis handle concurrent requests differently than MySQL? If they have their own mechanism and create something like child processes within the Redis client, why do we need to create three different Redis clients then? One should be enough.

I hope the question was clear.

-- UPDATE --

I have found an answer for the following question: http://howtonode.org/control-flow. No need to answer it, but my first question is still valid.

-- UPDATE --

My second question is this. I am also not that good at JS and Node.js. So, from what I know, if you need to wait for an event, you need to encapsulate the second function within the first function (I don't know the terminology yet). Let me give an example:

socket.on('startGame', function() {
    var user = getUser();
    socket.get('game', function (gameErr, gameId) {
        socket.get('channel', function (channelErr, channel) {
            console.log(user);
            client.get('games:' + channel + '::' + gameId + ':owner', function (err, owner) { //games:channel.32:game.14
                if (owner === user.uid) {
                    //do something
                }
            });
        });
    });
});

So, if I am learning it correctly, I need to nest every function inside the previous one whenever I need to wait for an I/O answer. Otherwise, Node.js's non-blocking mechanism will let the first function run and fetch its result in parallel, but the second function might not have that result yet if it takes time to arrive. So, if you are getting a result from Redis, for example, and you will use that result within a second function, you have to encapsulate the second function within the Redis get callback. Otherwise the second function will run without the result.
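For example, this is what I mean (a minimal sketch of the non-blocking behaviour; client is assumed to be a node_redis client created with redis.createClient()):

var redis = require('redis');
var client = redis.createClient();

// Non-blocking: client.get() only queues the command and returns immediately.
client.get('some:key', function (err, value) {
    // This callback runs later, once the reply from Redis has arrived.
    console.log('inside the callback:', value);
});

// This line runs first, before the reply exists, so anything that
// needs `value` has to live inside the callback above.
console.log('right after calling client.get()');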

So, in this case, if I need to run 7 different functions and the 8th function needs the results of all of them, do I need to keep nesting them like this? Or am I missing something?
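From the control-flow article above, the pattern for that case seems to be to fire the independent calls in parallel and count how many have completed, rather than nesting them. A hypothetical sketch (the key names and the allDone function are made up for illustration):

// Run several independent Redis reads in parallel and call allDone()
// once every one of them has finished.
var keys = ['stat:1', 'stat:2', 'stat:3']; // stand-ins for the 7 lookups
var results = {};
var pending = keys.length;

keys.forEach(function (key) {
    client.get(key, function (err, value) {
        results[key] = value;
        pending -= 1;
        if (pending === 0) {
            allDone(results); // the "8th" function gets everything at once
        }
    });
});

function allDone(results) {
    console.log('all lookups finished:', results);
}

Libraries such as async (async.parallel) wrap this bookkeeping up; nesting is only required when one call actually needs the result of the previous one.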

I hope this was clear too.

Thanks a lot,

Problem courtesy of: Merinn

Solution

So, every "user" should have their own Redis client within the connection event. Am I right?

Actually, you are not :)

The thing is that node.js is very unlike, for example, PHP. node.js does not spawn child processes on new connections, which is one of the main reasons it can easily handle large amounts of concurrent connections, including long-lived connections (Comet, Websockets, etc.). node.js processes events sequentially using an event queue within one single process. If you want to use several processes to take advantage of multi-core servers or multiple servers, you will have to do it manually (how to do so is beyond the scope of this question, though).
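For what it's worth, a minimal sketch of "doing it manually" with Node's built-in cluster module, which forks one worker process per CPU core (each worker then runs its own event loop and would hold its own Redis connection):

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    // Master process: fork one worker per core.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
} else {
    // Each worker is a separate process with its own single-threaded event loop.
    http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid + '\n');
    }).listen(8000);
}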

Therefore, it is a perfectly valid strategy to use one single Redis (or MySQL) connection to serve a large quantity of clients. This avoids the overhead of instantiating and terminating a database connection for each client request.
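As a minimal sketch (using the socket.io 0.x style API from the question and the node_redis client; the key name is illustrative), the Redis client is created once and then shared by every connection the process handles:

var redis = require('redis');
var io = require('socket.io').listen(8080);

// Created once, outside the connection event: this single Redis connection
// serves every socket handled by this Node.js process.
var client = redis.createClient();

io.sockets.on('connection', function (socket) {
    socket.on('startGame', function () {
        // Commands from all sockets are pipelined over the same connection;
        // node_redis matches each reply to the correct callback.
        client.get('games:owner', function (err, owner) {
            socket.emit('owner', owner);
        });
    });
});

The commands are simply queued on that one connection and answered in order, which is why a single client (or a small fixed number, e.g. a separate one for pub/sub, which cannot issue regular commands while subscribed) is enough.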

Solution courtesy of: Philippe Plantier




This post first appeared on Node.js Recipes.
