Asynchronous Method Invocation - CORBA for impatient clients

Our simple client illustrated how to use the traditional CORBA synchronous method invocation to query a number of stock prices. Suppose that we had to query hundreds of stock prices, for example, during the initialization of a complex market analysis tool. Sending the requests in sequence would yield poor performance; we would not be taking advantage of the natural parallelism in distributed systems, because we wait for the first response to come back before sending the next query.

Traditionally this problem has been attacked using either oneway calls or multiple threads. Both approaches can work, but both have disadvantages: multi-threaded programming can be hard and error-prone, and oneway calls are unreliable and require a callback interface just to return the stock value.

More recently the OMG approved the CORBA Messaging specification, which extends the basic invocation model to include asynchronous calls. Unlike the old deferred synchronous model, the new model uses the IDL compiler and the static invocation interface (SII) to achieve type safety and better performance, yet the application does not block waiting for a response. Instead, it gives the ORB a reference to a reply handler that receives the response asynchronously. The specification also defines a polling interface, which we will not discuss because TAO does not implement it.

For this problem we extend the IDL with a new interface, derived from Stock, that returns the price and the names in a single operation:

  interface Single_Query_Stock : Stock {
    double get_price_and_names (out string symbol,
                                out string full_name);
  };

This will simplify some of the examples.
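
For reference, invoking this operation synchronously would look roughly like the sketch below. This is only for contrast with the asynchronous version developed in the rest of this section; it assumes that stock already holds a Quoter::Single_Query_Stock reference, and relies on the standard C++ mapping of out string parameters to CORBA::String_out:

    // Hypothetical synchronous use of the new operation, shown only
    // for contrast with the asynchronous version developed below.
    CORBA::String_var symbol;
    CORBA::String_var full_name;
    CORBA::Double price =
      stock->get_price_and_names (symbol.out (), full_name.out ());
    std::cout << symbol.in () << " (" << full_name.in ()
              << ") costs " << price << std::endl;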

The first step is to generate the stubs and skeletons with support for AMI callbacks. We do this with the -GC flag:

$ $TAO_ROOT/TAO_IDL/tao_idl -GC Quoter.idl

You may want to take a brief look at the generated client side interface. The IDL compiler adds new methods to the Quoter::Single_Query_Stock stub class. In particular, pay attention to this operation:

  virtual void sendc_get_price_and_names (
      AMI_Single_Query_StockHandler_ptr ami_handler
    );

This is the operation used to send the request asynchronously. The response is delivered to the reply handler, which is a regular CORBA object with the following IDL interface:

interface AMI_Single_Query_StockHandler {
  void get_price_and_names (in double ami_return_val,
                            in string symbol,
                            in string full_name);
};

You don't have to write this IDL interface; the IDL compiler automatically generates this so-called implied IDL from your original IDL. Notice how the arguments are generated: the first argument is the return value of the original operation, followed by its out arguments, but declared as in parameters because the handler receives them as part of the reply.

Implementing the reply handler

We will have to implement a servant for this new IDL interface so we can receive the reply, exactly as we do for servers:

#include "QuoterS.h"   // skeletons generated by "tao_idl -GC Quoter.idl"
#include <iostream>

class Single_Query_Stock_Handler_i
  : public POA_Quoter::AMI_Single_Query_StockHandler
{
public:
  Single_Query_Stock_Handler_i (int *response_count)
    : response_count_ (response_count) {}

  // Called back by the ORB when the reply to
  // sendc_get_price_and_names() arrives.
  void get_price_and_names (CORBA::Double ami_return_val,
                            const char *symbol,
                            const char *full_name)
  {
    std::cout << "The price of one stock in \""
              << full_name << "\" (" << symbol << ") is "
              << ami_return_val << std::endl;
    ++(*this->response_count_);
  }

private:
  // Shared counter, incremented once per reply received.
  int *response_count_;
};

The response_count_ field will be used to terminate the client when all the responses are received.
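
Note that the implied IDL also defines an operation to deliver exception replies (the Messaging specification names it get_price_and_names_excep). Depending on your TAO version, the generated skeleton may declare it as pure virtual, in which case the servant has to override it as well; check the generated skeleton header for its exact signature.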

Sending asynchronous method invocations

The handler servant is activated like any other CORBA servant:

    int response_count = 0;
    Single_Query_Stock_Handler_i handler_i (&response_count);
    Quoter::AMI_Single_Query_StockHandler_var handler =
      handler_i._this ();

and now we change the loop to send all the requests at once:

    int request_count = 0;
    for (int i = 2; i != argc; ++i) {
      try {
        // Get the stock object and narrow it to the interface that
        // supports the asynchronous query.
        Quoter::Stock_var tmp =
          factory->get_stock (argv[i]);
        Quoter::Single_Query_Stock_var stock =
          Quoter::Single_Query_Stock::_narrow (tmp.in ());
        if (CORBA::is_nil (stock.in ()))
          continue;

        stock->sendc_get_price_and_names (handler.in ());
        request_count++;
      }
      catch (const CORBA::Exception &) {
        std::cerr << "Exception raised while processing <"
                  << argv[i] << ">" << std::endl;
      }
    }

After the loop we wait until all the responses have arrived:

    while (response_count < request_count
           && orb->work_pending ()) {
      orb->perform_work ();
    }
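
The work_pending() test is good enough for this example, but it makes the loop depend on whether the ORB happens to have work queued at the moment of the call. An alternative is a time-bounded loop, sketched below; it assumes TAO's non-standard perform_work() overload that takes an ACE_Time_Value:

    // A sketch of a time-bounded wait loop; perform_work (ACE_Time_Value &)
    // is a TAO-specific extension (ACE_Time_Value comes from "ace/Time_Value.h").
    while (response_count < request_count) {
      ACE_Time_Value tv (0, 50000);   // wake up every 50 ms
      orb->perform_work (tv);
    }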

Exercise 1

Complete the client.cpp file. Does this client play the server role too? If not, what is its role with respect to the handler servant? If you think it is also a server, what should you do about the POA?

You can use the following files to complete your implementation: Quoter.idl, Handler_i.h, and Handler_i.cpp. Remember that the main program of the simple client is a good start.

Solution

Look at the client.cpp file. It should not be very different from yours.
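
One detail worth comparing against your own version: because the client hosts the reply handler servant it plays the server role too, so the POAManager of the POA hosting the handler must be active before any replies can be dispatched. A minimal sketch, using the RootPOA and omitting error handling:

    // The client is also a server for the reply handler; activate the
    // POAManager so the ORB can dispatch the incoming replies.
    CORBA::Object_var poa_object =
      orb->resolve_initial_references ("RootPOA");
    PortableServer::POA_var poa =
      PortableServer::POA::_narrow (poa_object.in ());
    PortableServer::POAManager_var poa_manager =
      poa->the_POAManager ();
    poa_manager->activate ();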

Testing

A simple server is provided, based on the simple server from the introduction. As before, you need the following files: Stock_i.h, Stock_i.cpp, Stock_Factory_i.h, Stock_Factory_i.cpp, and server.cpp.

Configuration

So far we have used TAO's default configuration, but AMI works better with some fine tuning. For example, by default TAO uses a separate connection for each outstanding request. With synchronous method invocations (SMI) this is a very good strategy: separate threads can send concurrent requests without sharing any resources. It does not scale well with AMI, however, because a single thread can have many requests outstanding at once, which would create far too many connections. The solution is to change the strategy so that connections are shared (multiplexed). All we need to do is create a svc.conf file with the following contents:

static Client_Strategy_Factory "-ORBTransportMuxStrategy MUXED"
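
By default TAO reads a file named svc.conf from the current working directory when the ORB is initialized; if you keep the file elsewhere or under a different name, the -ORBSvcConf option lets you name it explicitly.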

There are many other configuration options, all of them documented in Options.html, in configurations.html, and in the Developer's Guide available from OCI.


Carlos O'Ryan