Contents
This document describes a typical workflow using the OWSI to compose and execute a model on a remote OWSI server. See also the OpenModeller Standard Web Services API; the WSDL is available and should be read in conjunction with this document.
Illustration
The following diagram is a useful reference for the discussion below. The original diagram is available in Kivio format here: owsi_workflow_example.flw
Part 1: Model definition
- User selects a server to which model requests will be sent. For example, in Triana this may be presented as a combo box listing Condor master nodes that support the OWSI. It is up to the client application to manage these server listings itself. In the future we may create a registry of servers with an appropriate web services API.
ping() Once a server has been selected, it is pinged to make sure it is available to take requests. A non-response at this point halts the workflow.
getLayers() This provides a complete list of all layers available on the server. Typically the user will choose a subset of these layers to create a 'model definition' layerset. Later, when the model has been created, they may use the same (or a different but semantically equivalent) layerset to project the model. The layerset returned from the server will include the directory hierarchy from the 'basepath' on the remote server. This simple mechanism allows the user to identify logical groupings of layers and to use this knowledge to create a logically consistent layerset. An example layerset document follows:
<Environment Layers="10" Name="FooBar All Layers" Description="All data available on server FooBar">
  <Map Filename='/present/Mean_daily_precipitation_in_coolest_month.tif' Categorical='0' />
  <Map Filename='/present/Lowest_temperature_in_coolest_month.tif' Categorical='0' />
  <Map Filename='/present/Highest_temperature_in_warmest_month.tif' Categorical='0' />
  <Map Filename='/present/Annual_temperature_range.tif' Categorical='0' />
  <Map Filename='/present/Mean_daily_precipitation.tif' Categorical='0' />
  <Map Filename='/2050/Mean_daily_precipitation_in_coolest_month.tif' Categorical='0' />
  <Map Filename='/2050/Lowest_temperature_in_coolest_month.tif' Categorical='0' />
  <Map Filename='/2050/Highest_temperature_in_warmest_month.tif' Categorical='0' />
  <Map Filename='/2050/Annual_temperature_range.tif' Categorical='0' />
  <Map Filename='/2050/Mean_daily_precipitation.tif' Categorical='0' />
</Environment>
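A client will typically assemble a layerset document like the one above from the layer filenames the user selected. The following sketch shows one way to do this in C++; the function name and the hard-coded Categorical='0' (continuous data) are illustrative assumptions, not part of the OWSI itself.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Build an <Environment> layerset document from a list of layer paths.
// Element and attribute names follow the example document above.
std::string buildLayerset(const std::string &name,
                          const std::string &description,
                          const std::vector<std::string> &layers)
{
    std::ostringstream xml;
    xml << "<Environment Layers=\"" << layers.size()
        << "\" Name=\"" << name
        << "\" Description=\"" << description << "\">";
    for (const std::string &layer : layers)
    {
        // Every layer is marked non-categorical (continuous) in this sketch.
        xml << " <Map Filename='" << layer << "' Categorical='0' />";
    }
    xml << " </Environment>";
    return xml.str();
}
```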
getAlgorithms() This provides a complete list of all algorithms along with their default parameters. The user will select one algorithm and optionally modify its parameters in order to create a custom parameter-set profile for that algorithm. The algorithm definition, along with its optionally customised parameter-set profile, is then used by the server implementing the OWSI to control the openModeller model execution process. I have provided a sample algorithmset.xml for your interest.
createModel(String) In our usage scenario above, the user would select one of the algorithm names, and the Algorithm element would be passed on to this createModel step. In addition they would select any number of layers to comprise a new Environment element (hereafter referred to as a layerset). The layerset and algorithm profile are then passed (combined into a single XML document) to the OWSI server where the model is to be run, and a Job Id for the model run is returned.
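The exact schema of the combined createModel request is not shown in this document; a minimal sketch, assuming the two elements are simply wrapped in a single root (the <ModelParameters> element here is a hypothetical placeholder), might look like:

```cpp
#include <string>

// Combine an <Algorithm> profile and an <Environment> layerset into one
// request document for createModel. NOTE: <ModelParameters> is a
// hypothetical wrapper element, not confirmed by the OWSI spec.
std::string buildModelRequest(const std::string &algorithmXml,
                              const std::string &layersetXml)
{
    return "<ModelParameters>" + algorithmXml + layersetXml + "</ModelParameters>";
}
```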
getProgress(String) Now we periodically call getProgress, passing it the Job Id of the model we launched in the previous step. This is repeated until an exit code of -2 (aborted) or 100 (model completed) is returned.
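The polling described above should pause between calls rather than hammer the server. A minimal sketch of such a loop follows; the callback stands in for the actual ns1__getProgress SOAP call (returning false signals a transport fault), and the interval value is an assumption, not something the OWSI mandates.

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll a progress callback until the job completes (100) or aborts (-2).
// Returns true only on successful completion. The callback mimics
// getProgress: it writes the current progress and reports call success.
bool waitForJob(const std::function<bool(float &)> &getProgress,
                std::chrono::milliseconds pollInterval)
{
    float progress = -999.0f; // -999 = undefined, -1 = queued
    while (progress != 100 && progress != -2)
    {
        if (!getProgress(progress))
        {
            return false; // SOAP fault: give up
        }
        std::this_thread::sleep_for(pollInterval);
    }
    return progress == 100;
}
```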
getModel(String) Once the model is complete, the model generated by the given Job Id is serialized as XML and returned to the client application. An example of a serialised model is provided here: Abarema_jupunba_model.xml
getLog(String) (not illustrated in the above diagram) Additional information may also be retrieved from the OWSI server by calling this function. Text output from the openModeller process for this Job Id will be returned.
Part 2: Model projection
getLayers() This provides a complete list of all layers available on the server. Typically the user will choose a subset of these layers to create a 'model projection' layerset. The layers chosen by the user should be semantically equivalent to the layers used to create the model.
projectModel(String,String) The first input is the model definition as generated by createModel; the second input is an <Environment> document describing the climate scenario layerset into which the model should be projected. When the job is submitted to the OWSI, the Job Id is returned to the client application.
getProgress(String) Now we periodically call getProgress, passing it the Job Id of the projection we launched in the previous step. This is repeated until an exit code of -2 (aborted) or 100 (projection completed) is returned.
getMapUrl(String) After the projection is complete, an image should have been created. The URL for viewing the map generated by the given Job Id is returned.
getLog(String) (not illustrated in the above diagram) Additional information may also be retrieved from the OWSI server by calling this function. Text output from the openModeller process for this Job Id will be returned.
Example program using soap bindings
Note: This example is still under construction:
#include <iostream>
#include "soapOpenModellerWrapperSoapBindingProxy.h"
#include "OpenModellerWrapperSoapBinding.nsmap"
int main(int argc, char *argv[])
{
std::cout << "hello world\n";
OpenModellerWrapperSoapBinding w;
std::string request, result;
//test the ping method
std::cout << "calling pingModel" << std::endl;
if (w.ns1__ping(result) == SOAP_OK)
{
std::cout << result << std::endl;
}
else
{
soap_print_fault(w.soap, stderr);
}
//test the create model method
std::cout << "calling createModel" << std::endl;
request = "dummy";
if (w.ns1__createModel(request, result) == SOAP_OK)
{
std::cout << result << std::endl;
}
else
{
soap_print_fault(w.soap, stderr);
return 0;
}
// poll the ws until the job is done
std::cout << "calling getProgress until job is done" << std::endl;
// result from createModel call should be the jobid that we are waiting on for completion
std::string jobId = result;
float progress = -999.0; //undefined state!
// -999 = undefined, -1 = queued, -2 = aborted, 100 = completed
while (progress != 100 && progress != -2)
{
if (w.ns1__getProgress(jobId, progress) == SOAP_OK)
{
std::cout << "Progress of Job " << jobId << " is " << progress << std::endl;
}
else
{
soap_print_fault(w.soap, stderr);
return 0;
}
}
//get the log messages for this job
std::cout << "calling getLog" << std::endl;
if (w.ns1__getLog(jobId, result) == SOAP_OK)
{
std::cout << "Log of Job " << jobId << " is: " << std::endl;
std::cout << "-----------------------------------------------" << std::endl;
std::cout << result << std::endl;
std::cout << "-----------------------------------------------" << std::endl;
}
else
{
soap_print_fault(w.soap, stderr);
return 0;
}
//check if the model failed or was aborted
if (progress == -2)
{
//model was aborted, so bail out
std::cout << "Model was aborted on server, quitting....." << std::endl;
return 1;
}
// now that the model is done, we can get the model result
std::cout << "calling getModel" << std::endl;
std::string model;
if (w.ns1__getModel(jobId, model) == SOAP_OK)
{
std::cout << "Model of Job " << jobId << " is: " << std::endl;
std::cout << "-----------------------------------------------" << std::endl;
std::cout << model << std::endl;
std::cout << "-----------------------------------------------" << std::endl;
}
else
{
soap_print_fault(w.soap, stderr);
return 0;
}
//test the project model method
std::cout << "calling projectModel " << std::endl;
std::string envlayers = "<Environment></Environment>";
// the third parameter receives the Job Id for the projection job
if (w.ns1__projectModel(model, envlayers, jobId) == SOAP_OK)
{
std::cout << "Projection Job Id is " << jobId << std::endl;
}
}
else
{
soap_print_fault(w.soap, stderr);
return 0;
}
// poll the ws until the projection job is done
// reset the progress so the polling loop runs again for the new job
progress = -999.0;
// -999 = undefined, -1 = queued, -2 = aborted, 100 = completed
while (progress != 100 && progress != -2)
{
if (w.ns1__getProgress(jobId, progress) == SOAP_OK)
{
std::cout << "Progress of Job " << jobId << " is " << progress << std::endl;
}
else
{
soap_print_fault(w.soap, stderr);
return 0;
}
}
//get the log messages for this job
if (w.ns1__getLog(jobId, result) == SOAP_OK)
{
std::cout << "Log of Job " << jobId << " is: " << std::endl;
std::cout << "-----------------------------------------------" << std::endl;
std::cout << result << std::endl;
std::cout << "-----------------------------------------------" << std::endl;
}
else
{
soap_print_fault(w.soap, stderr);
return 0;
}
//check if the projection failed or was aborted
if (progress == -2)
{
//projection was aborted, so bail out
std::cout << "Projection was aborted on server, quitting....." << std::endl;
return 1;
}
return 0;
}