
KIE Server: Extend KIE Server client with new capabilities

The last part of the KIE Server extensions series is about extending the KIE Server Client with additional capabilities.

Use case

On top of what was built in the second article (adding a Mina transport to KIE Server), we need to add a KIE Server Client extension that allows us to use the Mina transport with the unified KIE Server Client API.

Before you start, create an empty Maven project (jar packaging) with the following dependencies:

<properties>
  <version.org.kie>6.4.0-SNAPSHOT</version.org.kie>
</properties>

<dependencies>
  <dependency>
    <groupId>org.kie.server</groupId>
    <artifactId>kie-server-api</artifactId>
    <version>${version.org.kie}</version>
  </dependency>

  <dependency>
    <groupId>org.kie.server</groupId>
    <artifactId>kie-server-client</artifactId>
    <version>${version.org.kie}</version>
  </dependency>

  <dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-compiler</artifactId>
    <version>${version.org.kie}</version>
  </dependency>
</dependencies>

Design ServicesClient API interface

The first thing we need to do is decide what API should be exposed to the callers of our client API. Since the Mina extension builds on top of the Drools one, let's provide the same capabilities as RuleServicesClient:

public interface RulesMinaServicesClient extends RuleServicesClient {

}

As you can see, it simply extends the default RuleServicesClient interface and thus provides the same capabilities.

Why do we need an additional interface for it? Because we are going to register client implementations based on their interface, and there can be only one implementation for a given interface.

Implement RulesMinaServicesClient

The next step is to actually implement the client; for simplicity's sake we are going to use plain socket-based communication. We could use the Apache Mina client API, though that would introduce an additional dependency which we don't need for a sample implementation.

Note that this client implementation is very simple and in many cases can be improved, but the point here is to show how it can be implemented rather than provide bullet-proof code.

A few aspects to remember when reviewing the implementation:
  • it relies on the default KIE Server client configuration and thus uses serverUrl as the place to provide the host and port of the Mina server
  • it hardcodes JSON as the marshalling format
  • whether the response is a success or a failure is decided by checking if the received message is a JSON object (starts with {) - very naive, though it works for simple cases
  • it uses direct socket communication with a blocking API, waiting for the first line of the response and then reading all lines that are available
  • it does not use "stream mode", meaning it disconnects from the server after invoking a command
Here is the implementation:
public class RulesMinaServicesClientImpl implements RulesMinaServicesClient {

    private String host;
    private Integer port;

    private Marshaller marshaller;

    public RulesMinaServicesClientImpl(KieServicesConfiguration configuration, ClassLoader classloader) {
        String[] serverDetails = configuration.getServerUrl().split(":");

        this.host = serverDetails[0];
        this.port = Integer.parseInt(serverDetails[1]);

        this.marshaller = MarshallerFactory.getMarshaller(configuration.getExtraJaxbClasses(), MarshallingFormat.JSON, classloader);
    }

    public ServiceResponse<String> executeCommands(String id, String payload) {
        try {
            String response = sendReceive(id, payload);
            if (response.startsWith("{")) {
                return new ServiceResponse<String>(ResponseType.SUCCESS, null, response);
            } else {
                return new ServiceResponse<String>(ResponseType.FAILURE, response);
            }
        } catch (Exception e) {
            throw new KieServicesException("Unable to send request to KIE Server", e);
        }
    }

    public ServiceResponse<String> executeCommands(String id, Command<?> cmd) {
        try {
            String response = sendReceive(id, marshaller.marshall(cmd));
            if (response.startsWith("{")) {
                return new ServiceResponse<String>(ResponseType.SUCCESS, null, response);
            } else {
                return new ServiceResponse<String>(ResponseType.FAILURE, response);
            }
        } catch (Exception e) {
            throw new KieServicesException("Unable to send request to KIE Server", e);
        }
    }

    protected String sendReceive(String containerId, String content) throws Exception {

        // flatten the content to a single line
        content = content.replaceAll("\\n", "");

        Socket minaSocket = null;
        PrintWriter out = null;
        BufferedReader in = null;

        StringBuffer data = new StringBuffer();
        try {
            minaSocket = new Socket(host, port);
            out = new PrintWriter(minaSocket.getOutputStream(), true);
            in = new BufferedReader(new InputStreamReader(minaSocket.getInputStream()));

            // prepare and send data
            out.println(containerId + "|" + content);
            // wait for the first line
            data.append(in.readLine());
            // and then continue as long as more lines are available
            while (in.ready()) {
                data.append(in.readLine());
            }

            return data.toString();
        } finally {
            // guard against NullPointerException if the socket could not be opened
            if (out != null) {
                out.close();
            }
            if (in != null) {
                in.close();
            }
            if (minaSocket != null) {
                minaSocket.close();
            }
        }
    }
}

Once we have the client interface and the client implementation, we need to make it available for the KIE Server client to find.

Implement KieServicesClientBuilder

org.kie.server.client.helper.KieServicesClientBuilder is the glue interface that allows additional client APIs to be provided to the generic KIE Server Client infrastructure. This interface has two methods:
  • getImplementedCapability - which must match the server capability (extension) it is going to use
  • build - which is responsible for providing a map of client implementations, where the key is the interface and the value a fully initialized implementation
Here is a simple implementation of the client builder for this use case:

public class MinaClientBuilderImpl implements KieServicesClientBuilder {

    public String getImplementedCapability() {
        return "BRM-Mina";
    }

    public Map<Class<?>, Object> build(KieServicesConfiguration configuration, ClassLoader classLoader) {
        Map<Class<?>, Object> services = new HashMap<Class<?>, Object>();

        services.put(RulesMinaServicesClient.class, new RulesMinaServicesClientImpl(configuration, classLoader));

        return services;
    }

}

Make it discoverable

Same story as for the other extensions... once we have everything implemented, it's time to make it discoverable so the KIE Server Client can find and register this extension at runtime. Since the KIE Server Client is based on the Java SE ServiceLoader mechanism, we need to add one file to our extension jar file:

META-INF/services/org.kie.server.client.helper.KieServicesClientBuilder

The content of this file is a single line holding the fully qualified class name of our custom implementation of KieServicesClientBuilder.
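For illustration only - assuming the builder from the previous section lives in a package such as org.kie.server.ext.mina.client (the package name is hypothetical, use whatever your project defines) - the file would contain just:

org.kie.server.ext.mina.client.MinaClientBuilderImpl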


How to use it

The usage scenario does not differ much from the regular KIE Server Client use case:
  • create client configuration
  • create client instance
  • get service client by type
  • invoke client methods
Here is an implementation that creates a KIE Server Client for RulesMinaServicesClient:

protected RulesMinaServicesClient buildClient() {
KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration("localhost:9123", null, null);
List<String> capabilities = new ArrayList<String>();
// we need to add capabilities explicitly as the Mina client does not respond to get-server-info requests
capabilities.add("BRM-Mina");

configuration.setCapabilities(capabilities);
configuration.setMarshallingFormat(MarshallingFormat.JSON);

configuration.addJaxbClasses(extraClasses);

KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(configuration);

RulesMinaServicesClient rulesClient = kieServicesClient.getServicesClient(RulesMinaServicesClient.class);

return rulesClient;
}
And here is how it is used to invoke operations on KIE Server via the Mina transport:

RulesMinaServicesClient rulesClient = buildClient();

List<Command<?>> commands = new ArrayList<Command<?>>();
BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, "defaultKieSession");

Person person = new Person();
person.setName("mary");
commands.add(commandsFactory.newInsert(person, "person"));
commands.add(commandsFactory.newFireAllRules("fired"));

ServiceResponse<String> response = rulesClient.executeCommands(containerId, executionCommand);
Assert.assertNotNull(response);

Assert.assertEquals(ResponseType.SUCCESS, response.getType());

String data = response.getResult();

Marshaller marshaller = MarshallerFactory.getMarshaller(extraClasses, MarshallingFormat.JSON, this.getClass().getClassLoader());

ExecutionResultImpl results = marshaller.unmarshall(data, ExecutionResultImpl.class);
Assert.assertNotNull(results);

Object personResult = results.getValue("person");
Assert.assertTrue(personResult instanceof Person);

Assert.assertEquals("mary", ((Person) personResult).getName());
Assert.assertEquals("JBoss Community", ((Person) personResult).getAddress());
Assert.assertEquals(true, ((Person) personResult).isRegistered());

Complete code of this client extension can be found here.

And that's the last extension mechanism for providing more features in KIE Server than are given out of the box.

Thanks for reading the entire series on KIE Server extensions - any and all feedback is welcome :)

Advanced queries in jBPM 6.4

While working with BPM, access to the data being processed by the engine is very important. In many cases users would like options to easily and efficiently search for different data:

  • process instances started by...
  • process instances not completed until...
  • tasks assigned to ... for a given project
  • tasks not started for a given amount of time
  • process instances with given process variable(s)
  • tasks with given task variable(s)
These are just a few examples of advanced queries that are useful but tricky to provide out of the box, because:
  • different databases have different capabilities when it comes to efficient searches
  • the ORM in between adds a layer of complexity, even though it helps mitigate database differences
  • an out-of-the-box solution relies on compile-time data - such as JPA entities - that can be used in queries
  • it is not possible to build a data structure that fits all cases and is efficient to query

Again, just a few items that make out-of-the-box queries limited in terms of functionality. jBPM 6.3 comes with efficient query builders based on the JPA Criteria API that aim at solving many of the issues listed above, but they are constrained by a compile-time dependency: as a JPA-based solution, the entity manager must be aware of all possible types used in queries.

What's new in 6.4?

jBPM 6.4 comes with a solution to address these problems, and this solution is based on DashBuilder DataSets. DataSets are like database views - users can define them to pre-filter and aggregate data before it is queried or filtered further.

QueryService is part of the jbpm services api - a cross-framework API built to simplify usage of jBPM in embedded use cases. At the same time, the jbpm services api is the backbone of both KIE workbench and KIE Server (with its BPM capability).

QueryService exposes a simple yet powerful set of operations:
  • Management operations
    • register query definition
    • replace query definition
    • unregister query definition
    • get query definition
    • get queries
  • Runtime operations
    • query - with two flavors:
      • simple based on QueryParam as filter provider
      • advanced based on QueryParamBuilder as filter provider 
DashBuilder DataSets support multiple data sources (CSV, SQL, Elasticsearch, etc.), while jBPM - since its backend is RDBMS based - focuses on SQL-based data sets. So the jBPM QueryService exposes a subset of the DashBuilder DataSets capabilities to allow efficient queries with a simple API.

How to use it?

Let's define a use case that we can use throughout this article...
We are about to sell software, and for that we define a very simple process that deals with the sale operation. We also have a data model defined that represents our product sale:

ProductSale:
  String productCode
  String country
  Double price
  Integer quantity
  Date saleDate



As you can see, the process is very simple but aims at doing a few important things:
  • it makes use of both processes and user tasks
  • it deals with a custom data model as process and user task variables
  • it allows process and task variables to be stored externally (here as a JPA entity)
To take advantage of the advanced queries we need to make sure we have plenty of data being processed by jBPM, so we can properly measure how easily we can find the relevant data. For that we create 10 000 process instances (and with them 10 000 user tasks) that we can then search for using different criteria, as sketched below.
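As a rough idea of how such data could be seeded, here is a minimal sketch using the ProcessService from the jbpm services api; the deployment id, process id and variable name are assumptions based on the product-sale example and may differ in your project.

// minimal sketch - seed many process instances with randomized ProductSale variables
// the deployment id, process id and variable name below are assumptions; adjust to your project
Random random = new Random();
String deploymentId = "org.jbpm:product-sale:1.0";

for (int i = 0; i < 10000; i++) {
    ProductSale sale = new ProductSale();
    sale.setProductCode(random.nextBoolean() ? "EAP" : "WILDFLY");
    sale.setCountry(random.nextBoolean() ? "Brazil" : "Poland");
    sale.setPrice(random.nextDouble() * 1000);
    sale.setQuantity(random.nextInt(10) + 1);
    sale.setSaleDate(new Date());

    Map<String, Object> params = new HashMap<String, Object>();
    params.put("sale", sale); // assumed process variable name

    processService.startProcess(deploymentId, "product-sale.sale", params); // assumed process id
}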

Define query definitions

The first thing a user needs to do is define a data set - a view of the data you want to work with - the so-called QueryDefinition in the services api.

SqlQueryDefinition query = new SqlQueryDefinition("getAllProcessInstances", "java:jboss/datasources/ExampleDS");
query.setExpression("select * from processinstancelog");

queryService.registerQuery(query);

This is the simplest possible query definition:

  • the constructor takes:
    • a unique name that identifies it at runtime
    • the data source JNDI name used when performing queries on this definition - in other words, the source of data
  • the expression - the most important part - is the SQL statement that builds up the view to be filtered when performing queries
Once we have the SQL query definition we can register it so it can be used later for actual queries.

Perform basic queries

Next, make use of it via the queryService.query methods:

Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext());

What happened here...

  • we referenced the registered query by name - getAllProcessInstances
  • we provided a ProcessInstanceQueryMapper that is responsible for mapping the data to object instances
  • we provided a default query context that enables paging and sorting
Let's see it with a configured query context...

QueryContext ctx = new QueryContext(0, 100, "start_date", true);

Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), ctx);

Here we query the same query definition (data set), but we want to get 100 results starting at 0, ordered ascending by start date.

But that's not advanced at all... it's just doing paging and sorting on a single table... so let's add filtering to the mix.

// single filter param
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext(), QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jbpm%"));

// multiple filter params (AND)
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext(),
QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jbpm%"),
QueryParam.equalsTo(COLUMN_STATUS, 1, 3));

Here we have filtered our data set:

  • first query - by process id matching "org.jbpm%"
  • second query - by process id matching "org.jbpm%" and status being active or aborted

But that's still not very advanced, is it? Let's look at how to work with variables.

Perform queries with process and task variables

A common use case is to find process instances or tasks that have a given variable, or a given variable with a particular value.

From version 6.4 jBPM indexes task variables in the database (in previous versions it already did that for process instance variables). The indexing mechanism is configurable, but the default is a simple toString on the variable, kept in a single table:

  • Process instance variables - VariableInstanceLog table
  • Task variables - TaskVariableImpl table
Equipped with this information we can define data sets that allow us to query for task and process variables.

// process instances with variables
SqlQueryDefinition query = new SqlQueryDefinition("getAllProcessInstancesWithVariables", "java:jboss/datasources/ExampleDS");
query.setExpression("select pil.*, v.variableId, v.value " +
"from ProcessInstanceLog pil " +
"INNER JOIN (select vil.processInstanceId ,vil.variableId, MAX(vil.ID) maxvilid FROM VariableInstanceLog vil " +
"GROUP BY vil.processInstanceId, vil.variableId ORDER BY vil.processInstanceId) x " +
"ON (v.variableId = x.variableId AND v.id = x.maxvilid )" +
"INNER JOIN VariableInstanceLog v " +
"ON (v.processInstanceId = pil.processInstanceId)");

queryService.registerQuery(query);

// tasks with variables
query = new SqlQueryDefinition("getAllTaskInputInstancesWithVariables", "java:jboss/datasources/ExampleDS");
query.setExpression("select ti.*, tv.name tvname, tv.value tvvalue "+
"from AuditTaskImpl ti " +
"inner join (select tv.taskId, tv.name, tv.value from TaskVariableImpl tv where tv.type = 0 ) tv "+
"on (tv.taskId = ti.taskId)");

queryService.registerQuery(query);

Now we have registered new query definitions that allow us to search for processes and tasks and return variables as part of the query.

NOTE: usually when defining query definitions we don't want the data set to always be the same as the source tables, so it's good practice to narrow down the amount of data up front, for example by defining it for a given project (deploymentId) or process id. Keep in mind that you can have as many query definitions as you like - see the sketch below.
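For example, a minimal sketch of a definition narrowed to a single deployment (the deployment id below is just a placeholder; the deployment id is kept in the externalId column of ProcessInstanceLog):

// sketch: restrict the data set to a single deployment up front
SqlQueryDefinition narrowedQuery = new SqlQueryDefinition("getProductSaleProcessInstances", "java:jboss/datasources/ExampleDS");
narrowedQuery.setExpression("select * from ProcessInstanceLog where externalId = 'org.jbpm:product-sale:1.0'");

queryService.registerQuery(narrowedQuery);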

Now it's time to make use of these queries to fetch some results

Get process instances with variables:

List<ProcessInstanceWithVarsDesc> processInstanceLogs = queryService.query("getAllProcessInstancesWithVariables", ProcessInstanceWithVarsQueryMapper.get(), new QueryContext(), QueryParam.equalsTo(COLUMN_VAR_NAME, "approval_document"));

So we are able to find process instances that have a variable called 'approval_document'...

Get tasks with variables:

List<UserTaskInstanceWithVarsDesc> taskInstanceLogs = queryService.query("getAllTaskInputInstancesWithVariables", UserTaskInstanceWithVarsQueryMapper.get(), new QueryContext(), 
QueryParam.equalsTo(COLUMN_TASK_VAR_NAME, "Comment"),
QueryParam.equalsTo(COLUMN_TASK_VAR_VALUE, "Write a Document"));

... and here we can find tasks that have task variable 'Comment' and with value 'Write a Document'.

So a bit of progress with more advanced queries, but still nothing that couldn't be done with the out-of-the-box queries. The main limitation of the out-of-the-box variable indexes is that variables are always stored as strings and thus cannot be efficiently compared on the database side using operators such as >, <, between, etc.

... but wait - with query definitions you can take advantage of the SQL used to create your data view and use database-specific functions that cast or convert the string into different data types. With this you can tune the query definition to provide a subset of data with converted types, as illustrated below. Of course that comes with a performance penalty depending on the conversion type and the amount of data.
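As an illustration only - the cast syntax is database specific and the variable name is assumed - a definition that converts an indexed variable value to a number so it can be compared with numeric operators could look like:

// sketch: convert the string-indexed 'price' variable to a numeric column on the database side
// the cast function/syntax depends on your database; adjust accordingly
SqlQueryDefinition castQuery = new SqlQueryDefinition("getProcessInstancesWithPrice", "java:jboss/datasources/ExampleDS");
castQuery.setExpression("select pil.*, cast(v.value as numeric(10,2)) price " +
    "from ProcessInstanceLog pil " +
    "inner join VariableInstanceLog v on (v.processInstanceId = pil.processInstanceId) " +
    "where v.variableId = 'price'");

queryService.registerQuery(castQuery);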

So the next level of covering this use case is to externalize process and task variables (at least those that should be queryable) and keep them in separate table(s). jBPM comes with so-called pluggable variable persistence strategies and ships a JPA-based one out of the box. So you can create your process variable as an entity and it will be stored in a separate table. You can then take advantage of the mapping support (org.drools.persistence.jpa.marshaller.VariableEntity) that ensures the mapping between your entity and the process instance/task is maintained.

Here is a sample ProductSale object that is defined as a JPA entity and will be stored in a separate table:

@javax.persistence.Entity
public class ProductSale extends org.drools.persistence.jpa.marshaller.VariableEntity implements java.io.Serializable
{

static final long serialVersionUID = 1L;

@javax.persistence.GeneratedValue(strategy = javax.persistence.GenerationType.AUTO, generator = "PRODUCTSALE_ID_GENERATOR")
@javax.persistence.Id
@javax.persistence.SequenceGenerator(name = "PRODUCTSALE_ID_GENERATOR", sequenceName = "PRODUCTSALE_ID_SEQ")
private java.lang.Long id;

private java.lang.String productCode;

private java.lang.String country;

private java.lang.Double price;

private java.lang.Integer quantity;

private java.util.Date saleDate;

public ProductSale()
{
}

public java.lang.Long getId()
{
return this.id;
}

public void setId(java.lang.Long id)
{
this.id = id;
}

public java.lang.String getProductCode()
{
return this.productCode;
}

public void setProductCode(java.lang.String productCode)
{
this.productCode = productCode;
}

public java.lang.String getCountry()
{
return this.country;
}

public void setCountry(java.lang.String country)
{
this.country = country;
}

public java.lang.Double getPrice()
{
return this.price;
}

public void setPrice(java.lang.Double price)
{
this.price = price;
}

public java.lang.Integer getQuantity()
{
return this.quantity;
}

public void setQuantity(java.lang.Integer quantity)
{
this.quantity = quantity;
}

public java.util.Date getSaleDate()
{
return this.saleDate;
}

public void setSaleDate(java.util.Date saleDate)
{
this.saleDate = saleDate;
}
}

When such an entity is used as a process or task variable, it will be stored in the ProductSale table and referenced as a mapping in the MappedVariable table, so it can be joined to find process or task instances holding that variable.

Here we can make use of different data types in that entity - string, integer, double, date, long - and by that use various type-aware operators to filter data efficiently. So let's define another data set that provides us with tasks that can be filtered by product sale details.

// tasks with custom variable information
SqlQueryDefinition query = new SqlQueryDefinition("getAllTaskInstancesWithCustomVariables", "java:jboss/datasources/ExampleDS");
query.setExpression("select ti.*, c.country, c.productCode, c.quantity, c.price, c.saleDate " +
"from AuditTaskImpl ti " +
" inner join (select mv.map_var_id, mv.taskid from MappedVariable mv) mv " +
" on (mv.taskid = ti.taskId) " +
" inner join ProductSale c " +
" on (c.id = mv.map_var_id)");

queryService.registerQuery(query);

// tasks with custom variable information with assignment filter
SqlQueryDefinition queryTPO = new SqlQueryDefinition("getMyTaskInstancesWithCustomVariables", "java:jboss/datasources/ExampleDS", Target.PO_TASK);
queryTPO.setExpression("select ti.*, c.country, c.productCode, c.quantity, c.price, c.saleDate, oe.id oeid " +
"from AuditTaskImpl ti " +
" inner join (select mv.map_var_id, mv.taskid from MappedVariable mv) mv " +
" on (mv.taskid = ti.taskId) " +
" inner join ProductSale c " +
" on (c.id = mv.map_var_id), " +
" PeopleAssignments_PotOwners po, OrganizationalEntity oe " +
" where ti.taskId = po.task_id and po.entity_id = oe.id");

queryService.registerQuery(queryTPO);

Here we registered two additional query definitions:

  • the first loads both task info and product sale info into the data set
  • the second is the same as the first but joined with potential owner information, to return tasks only for authorized users
In the second query you can notice the third parameter in the constructor, which defines the target - this is mainly to instruct QueryService to apply default filters such as the user or group filter for potential owners. The same filter parameters can be set manually, so it's just a shortcut given by the API.



Marked in blue are the variables from the custom table and in orange the task details.

Now we can perform queries that benefit from the externally stored variable information, finding tasks by various properties (of different types) using various operators:

Map<String, String> variableMap = new HashMap<String, String>();
variableMap.put("COUNTRY", "string");
variableMap.put("PRODUCTCODE", "string");
variableMap.put("QUANTITY", "integer");
variableMap.put("PRICE", "double");
variableMap.put("SALEDATE", "date");

// let's find tasks for product EAP and country Brazil with status Ready or Reserved
List<UserTaskInstanceWithVarsDesc> taskInstanceLogs = queryService.query(query.getName(),
UserTaskInstanceWithCustomVarsQueryMapper.get(variableMap), new QueryContext(),
QueryParam.equalsTo("productCode", "EAP"),
QueryParam.equalsTo("country", "Brazil"),
QueryParam.in("status", Arrays.asList(Status.Ready.toString(), Status.Reserved.toString())));


// now let's search for tasks that are for EAP with a sale date between the beginning and end of February
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");

Date from = sdf.parse("2016-02-01");
Date to = sdf.parse("2016-03-01");
taskInstanceLogs = queryService.query(query.getName(),
UserTaskInstanceWithCustomVarsQueryMapper.get(variableMap), new QueryContext(),
QueryParam.equalsTo("productCode", "EAP"),
QueryParam.between("saleDate", from, to),
QueryParam.in("status", Arrays.asList(Status.Ready.toString(), Status.Reserved.toString())));


Here you can see how easy and efficient queries can be using variables stored externally. You can take advantage of type-based operators to effectively narrow down the results.

As you might have noticed, this time we use another type of mapper - UserTaskInstanceWithCustomVarsQueryMapper - that is responsible for mapping both task information and custom variables. Thus we need to provide the column mapping - name and type - so the mapper knows how to read data from the database and preserve the actual type.

Mappers are rather powerful and pluggable; you can implement your own mappers that transform the result into whatever type you like. jBPM comes with the following mappers out of the box:

  • org.jbpm.kie.services.impl.query.mapper.ProcessInstanceQueryMapper
    • registered with name - ProcessInstances
  • org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithVarsQueryMapper
    • registered with name - ProcessInstancesWithVariables
  • org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithCustomVarsQueryMapper
    • registered with name - ProcessInstancesWithCustomVariables
  • org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceQueryMapper
    • registered with name - UserTasks
  • org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithVarsQueryMapper
    • registered with name - UserTasksWithVariables
  • org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithCustomVarsQueryMapper
    • registered with name - UserTasksWithCustomVariables
  • org.jbpm.kie.services.impl.query.mapper.TaskSummaryQueryMapper
    • registered with name - TaskSummaries


Mappers are registered by name to simplify their lookup and to avoid a compile-time dependency on the actual mapper implementation. Instead you can use:

org.jbpm.services.api.query.NamedQueryMapper

which simply expects the name of the actual mapper, resolved at the time the query is performed, for example:
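A minimal sketch of how that could look, reusing the getAllProcessInstances definition registered earlier (the mapper is looked up by its registered name rather than referenced as a class):

// the mapper is resolved by its registered name ("ProcessInstances") when the query runs
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances",
        new NamedQueryMapper<Collection<ProcessInstanceDesc>>("ProcessInstances"),
        new QueryContext());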

Here you can find the complete product-sale project that can be imported into KIE workbench for inspection and customization.

QueryParamBuilder

Last but not least is the QueryParamBuilder, which provides a more advanced way of building filters for our data sets. By default, when using the query method of QueryService that accepts zero or more QueryParam instances (as we have seen in the examples above), all of these params are joined with the AND operator, meaning all of them must match. But that's not always what you want, which is why QueryParamBuilder has been introduced - users can build their own builders and provide them at the time the query is issued.
QueryParamBuilder is a simple interface whose build method is invoked (as long as it returns a non-null value) before the query is performed. So you can build up complex filter options that could not easily be expressed by a list of QueryParams.

Here is a basic implementation of QueryParamBuilder to give you a bit of a jump start for implementing your own - note that it relies on the DashBuilder DataSet API.

public class TestQueryParamBuilder implements QueryParamBuilder<ColumnFilter> {

    private Map<String, Object> parameters;
    private boolean built = false;

    public TestQueryParamBuilder(Map<String, Object> parameters) {
        this.parameters = parameters;
    }

    @Override
    public ColumnFilter build() {
        // return null if it was already invoked
        if (built) {
            return null;
        }

        String columnName = "processInstanceId";

        ColumnFilter filter = FilterFactory.OR(
                FilterFactory.greaterOrEqualsTo((Long) parameters.get("min")),
                FilterFactory.lowerOrEqualsTo((Long) parameters.get("max")));
        filter.setColumnId(columnName);

        built = true;
        return filter;
    }

}
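To use it, pass an instance of the builder to the query method instead of a list of QueryParam instances - a minimal sketch, reusing the getAllProcessInstances definition registered earlier:

// build the filter parameters the builder expects
Map<String, Object> params = new HashMap<String, Object>();
params.put("min", 10L);
params.put("max", 20L);

// the builder is consulted before the query runs and contributes the OR-based column filter
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances",
        ProcessInstanceQueryMapper.get(), new QueryContext(), new TestQueryParamBuilder(params));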


This concludes the introduction to the new QueryService based on the Dashbuilder DataSet API, allowing tailored queries against all possible data, including (but not limited to) jBPM data.

This article focused on the jbpm services api, but this functionality is also available in KIE Server for remote use cases. Stay tuned for another article describing the remote capabilities.


Advanced queries in KIE Server

As a follow-up to the Advanced queries in jBPM 6.4 article, let's take a look at queries in KIE Server's BPM capability.
Since KIE Server's BPM capability is based on the jbpm services api, it provides access to QueryService and its advanced (DashBuilder DataSets based) operations.

We are going to use the same use case: product sale with 10 000 loaded process and task instances. We show how you can query data both via the KIE Server client and directly via the raw REST API.

KIE Server's capabilities when it comes to advanced queries mirror what's available in the services api, so users can:

  • register query definitions
  • replace query definitions
  • unregister query definitions
  • get list of queries or individual query definition
  • execute queries on top of query definitions with 
    • paging and sorting
    • filter parameters
    • query with custom param builder and mappers
So let's start simple and build our KIE Server client to use query services:

KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration(serverUrl, user, password);

Set<Class<?>> extraClasses = new HashSet<Class<?>>();
extraClasses.add(Date.class); // for JSON only to properly map dates

configuration.setMarshallingFormat(MarshallingFormat.JSON);
configuration.addJaxbClasses(extraClasses);

KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(configuration);

QueryServicesClient queryClient = kieServicesClient.getServicesClient(QueryServicesClient.class);

Now we are ready to make use of the query service via QueryServicesClient.

List the query definitions available in the system:

List<QueryDefinition> queryDefs = queryClient.getQueries(0, 10);
System.out.println(queryDefs);

Next, let's register a new query definition that we can use for advanced queries:

QueryDefinition query = new QueryDefinition();
query.setName("getAllTaskInstancesWithCustomVariables");
query.setSource("java:jboss/datasources/ExampleDS");
query.setExpression("select ti.*, c.country, c.productCode, c.quantity, c.price, c.saleDate " +
"from AuditTaskImpl ti " +
" inner join (select mv.map_var_id, mv.taskid from MappedVariable mv) mv " +
" on (mv.taskid = ti.taskId) " +
" inner join ProductSale c " +
" on (c.id = mv.map_var_id)");

queryClient.registerQuery(query);

Once the query is registered we can make use of it and start fetching data. First, a very basic query:

List<TaskInstance> tasks = queryClient.query("getAllTaskInstancesWithCustomVariables", "UserTasks", 0, 10, TaskInstance.class);
System.out.println(tasks);

This will return task instances directly from the data set without any filtering, using the UserTasks mapper to build up the object representation and applying paging - the first page with at most 10 results.

Now it's time to use the more advanced query capabilities and start filtering by process variables. As described in the Advanced queries in jBPM 6.4 article, to map custom variables we need to provide their column mapping - name and type. The following is an example that searches for tasks where:

  • processInstanceId is between 1000 and 2000 - number range condition
  • price is over 800 - number comparison condition
  • sale date is between 01.02.2016 and 01.03.2016 - date range condition
  • the product on sale is EAP or Wildfly - group (IN) condition
  • results are ordered descending by saleDate and country

SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");

Date from = sdf.parse("2016-02-01");
Date to = sdf.parse("2016-03-01");

QueryFilterSpec spec = new QueryFilterSpecBuilder()
.between("processInstanceId", 1000, 2000)
.greaterThan("price", 800)
.between("saleDate", from, to)
.in("productCode", Arrays.asList("EAP", "WILDFLY"))
.oderBy("saleDate, country", false)
.addColumnMapping("COUNTRY", "string")
.addColumnMapping("PRODUCTCODE", "string")
.addColumnMapping("QUANTITY", "integer")
.addColumnMapping("PRICE", "double")
.addColumnMapping("SALEDATE", "date")
.get();

List<TaskInstance> tasks = queryClient.query("getAllTaskInstancesWithCustomVariables", "UserTasksWithCustomVariables", spec, 0, 10, TaskInstance.class);
System.out.println(tasks);

The query in the above example uses QueryFilterSpec (and its builder), which allows query parameters and sorting options to be specified. In addition it allows a column mapping to be specified for custom elements, set as variables next to the default columns for task details. These column mappings are then delivered to the mapper for transforming results - in this case we used the built-in mapper UserTasksWithCustomVariables, which collects all task details plus the given column mappings as custom variable data.

QueryFilterSpec maps to the use of QueryParams in the services api, so it inherits the same limitation - all conditions are AND based, meaning all of them must match to get a hit.

To overcome this, the services api introduced QueryParamBuilder so users can build advanced filters. The same applies to KIE Server, though the builders need to be built and included in one of the following:

  • KIE Server itself (like in WEB-INF/lib)
  • Inside a project - kjar
  • Inside a project's dependency
Implementing a QueryParamBuilder to be used in KIE Server requires a factory so it can be discovered and created at query time - every time a query is issued, a new instance of QueryParamBuilder will be requested with the given parameters.

Using QueryParamBuilder in KIE Server

To be able to use a QueryParamBuilder, users need to:
  • Implement a QueryParamBuilder that will produce a new instance every time it is requested, given a map of parameters

public class TestQueryParamBuilder implements QueryParamBuilder<ColumnFilter> {

    private Map<String, Object> parameters;
    private boolean built = false;

    public TestQueryParamBuilder(Map<String, Object> parameters) {
        this.parameters = parameters;
    }

    @Override
    public ColumnFilter build() {
        // return null if it was already invoked
        if (built) {
            return null;
        }

        String columnName = "processInstanceId";

        ColumnFilter filter = FilterFactory.OR(
                FilterFactory.greaterOrEqualsTo(((Number) parameters.get("min")).longValue()),
                FilterFactory.lowerOrEqualsTo(((Number) parameters.get("max")).longValue()));
        filter.setColumnId(columnName);

        built = true;
        return filter;
    }

}
The above builder will produce a filter that accepts processInstanceId values that are greater than or equal to min or lower than or equal to max, where min and max are given as part of each query request.
  • Implement a QueryParamBuilderFactory:
public class TestQueryParamBuilderFactory implements QueryParamBuilderFactory {

    @Override
    public boolean accept(String identifier) {
        if ("test".equalsIgnoreCase(identifier)) {
            return true;
        }
        return false;
    }

    @Override
    public QueryParamBuilder newInstance(Map<String, Object> parameters) {
        return new TestQueryParamBuilder(parameters);
    }

}
The factory is responsible for returning new instances of the query param builder, but only if the given identifier is accepted by the factory. The identifier is given as part of the query request, and only one query builder factory can be selected based on it. In this case the "test" identifier needs to be given to use this factory and, in turn, the query param builder.

There is one last tiny bit required to make this work - we need to make it discoverable, so let's add a service file into the META-INF folder of the jar that packages these implementations.

META-INF/services/org.jbpm.services.api.query.QueryParamBuilderFactory

where the content of this file is the fully qualified class name of the factory.

With this we can issue a request that makes use of the newly created query builder for advanced filters:

Map<String, Object> params = new HashMap<String, Object>();
params.put("min", 10);
params.put("max", 20);

List<TaskInstance> instances = queryClient.query("getAllTaskInstancesWithCustomVariables", "UserTasksWithCustomVariables", "test", params, 0, 10, TaskInstance.class);
So what we have done here:

  • referenced the registered query by name - getAllTaskInstancesWithCustomVariables
  • referenced the mapper by name - UserTasksWithCustomVariables
  • referenced the query param builder identifier - test
  • sent params (min and max) that will be used by a new instance of the query builder before the query is executed

Similarly, you can register and use custom mappers, and it is even simpler than with query param builders as there is no need for a factory - the services api comes with a registry that KIE Server uses to register mappers found via ServiceLoader-based discovery.

Implement the mapper so it can be used in KIE Server:


public class ProductSaleQueryMapper extends UserTaskInstanceWithCustomVarsQueryMapper {

    private static final long serialVersionUID = 3299692663640707607L;

    public ProductSaleQueryMapper() {
        super(getVariableMapping());
    }

    protected static Map<String, String> getVariableMapping() {
        Map<String, String> variablesMap = new HashMap<String, String>();

        variablesMap.put("COUNTRY", "string");
        variablesMap.put("PRODUCTCODE", "string");
        variablesMap.put("QUANTITY", "integer");
        variablesMap.put("PRICE", "double");
        variablesMap.put("SALEDATE", "date");

        return variablesMap;
    }

    @Override
    public String getName() {
        return "ProductSale";
    }
}

Here we simply extend UserTaskInstanceWithCustomVarsQueryMapper and provide the column mapping directly, so it can be used without a column mapping at the request level. To be able to use it, the mapper needs to be made discoverable, so we need to create a service file within the META-INF folder of the jar that packages this implementation.

META-INF/services/org.jbpm.services.api.query.QueryResultMapper

where the content of this file is the fully qualified class name of the mapper.

Now we can use it directly by referencing it by name:

List<TaskInstance> tasks = queryClient.query("getAllTaskInstancesWithCustomVariables", "ProductSale", 0, 10, TaskInstance.class);
System.out.println(tasks);

Raw REST API use of described examples

Get query definitions
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/queries/definitions?page=0&pageSize=10
Method:
  • GET

Register query definition
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/queries/definitions/getAllTaskInstancesWithCustomVariables
Method:
  • POST
Request body:
{
  "query-name" : "getAllTaskInstancesWithCustomVariables1",
  "query-source" : "java:jboss/datasources/ExampleDS",
  "query-expression" : "select ti.*,  c.country, c.productCode, c.quantity, c.price, c.saleDate from AuditTaskImpl ti     inner join (select mv.map_var_id, mv.taskid from MappedVariable mv) mv       on (mv.taskid = ti.taskId)     inner join ProductSale c       on (c.id = mv.map_var_id)",
  "query-target" : "CUSTOM"

}

Query for tasks - no filtering
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/queries/definitions/getAllTaskInstancesWithCustomVariables/data?mapper=UserTasks&orderBy=&page=0&pageSize=10
Method:
  • GET


Query with filter spec
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/queries/definitions/getAllTaskInstancesWithCustomVariables/filtered-data?mapper=UserTasksWithCustomVariables&page=0&pageSize=10
Method:
  • POST
Request body:
{
  "order-by" : "saleDate, country",
  "order-asc" : false,
  "query-params" : [ {
    "cond-column" : "processInstanceId",
    "cond-operator" : "BETWEEN",
    "cond-values" : [ 1000, 2000 ]
  }, {
    "cond-column" : "price",
    "cond-operator" : "GREATER_THAN",
    "cond-values" : [ 800 ]
  }, {
    "cond-column" : "saleDate",
    "cond-operator" : "BETWEEN",
    "cond-values" : [ {"java.util.Date":1454281200000}, {"java.util.Date":1456786800000} ]
  }, {
    "cond-column" : "productCode",
    "cond-operator" : "IN",
    "cond-values" : [ "EAP", "WILDFLY" ]
  } ],
  "result-column-mapping" : {
    "PRICE" : "double",
    "PRODUCTCODE" : "string",
    "COUNTRY" : "string",
    "SALEDATE" : "date",
    "QUANTITY" : "integer"
  }
}

Query with custom query param builder
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/queries/definitions/getAllTaskInstancesWithCustomVariables/filtered-data?mapper=UserTasksWithCustomVariables&builder=test&page=0&pageSize=10
Method:
  • POST
Request body:
{
  "min" : 10,
  "max" : 20
}

Query for tasks - custom mapper
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/queries/definitions/getAllTaskInstancesWithCustomVariables/data?mapper=ProductSale&orderBy=&page=0&pageSize=10
Method:
  • GET

With this, we have gone over the support for advanced queries in KIE Server's BPM capability.

As usual, feedback is welcome :)



Are you ready to dive into (wildfly) swarm?

KIE Server is a lightweight execution server that comes with various capabilities, with the following available out of the box:

  • BRM - rules execution (Drools)
  • BPM - business process execution, task management, background jobs (jBPM)
  • BPM-UI - visualize your BPM components at runtime, such as process definitions and instances (since 6.4)
  • BRP - business resource planning (OptaPlanner) (since 6.4)
By default it's packaged as a JEE application (web archive) and deployed to various containers, such as:
  • JBoss EAP
  • Wildfly
  • Tomcat
  • WebLogic
  • WebSphere
While all this is already quite nice coverage, we don't stay idle and keep working on bringing more to you. Let's see what's coming next...

All the hype about microservices is bringing in tons of new stuff that allows an alternative approach to packaging and deploying our systems or services. Considering the capabilities KIE Server comes with, it would be a crime not to take advantage of it to start building microservices, instead of rewriting everything in a different way.


It's time to introduce Wildfly Swarm (to those that haven't heard about it yet) ...


Swarm offers an innovative approach to packaging and running JavaEE applications by packaging them with just enough of the platform to "java -jar" your application
So what does Wildfly Swarm mean in the context of KIE Server?

Actually it means a lot:

  • first of all, it allows us to build executable jars that bring KIE Server capabilities, with all their power, to the simple java -jar way of working!
  • next, you can have an "executable kjar" just by starting it with an argument that identifies the kjar to be available for execution (Group Artifact Version)
  • you can still run in managed mode - connected to a controller and managed from within it, but without a need to provision your application server

With this in mind let's take a look at how to use it with Wildfly Swarm.

  • Clone this repository kie-server-swarm into your local environment.
  • Build the project with Maven (mvn clean package)
    • Make sure you run it with the latest Maven, otherwise you might run into build errors - I tested it with 3.3.9, so it certainly works with that version
  • Once it's successfully built you'll find the following file inside the target folder
    • kie-server-swarm-1.0-swarm.jar
  • Now you're ready to rock with KIE Server on Wildfly Swarm

But before we start our KIE Server on Swarm, let's look at what options we have for the project we just built. This project, same as KIE Server, is modularized and allows us to pick only the things we are interested in. While KIE Server allows extensions to be disabled at runtime (via system properties), sometimes it does not make sense to bring in lots of dependencies if they are not going to be used.

So you can build the project with the following profiles:
  • BRM - includes the BRM capability of KIE Server that allows rules execution only
    • no server components besides REST are configured
    • build it with - mvn clean package -PBRM
  • BPM - includes both the BRM and BPM capabilities of KIE Server - this is the default profile
    • configures Swarm to have transactions and data sources enabled
    • build it with - mvn clean package -PBPM or mvn clean package
So why is it important to have this done as profiles? Because the size of the resulting file (executable jar) will be smaller. Moreover, it reduces the number of things Swarm is going to configure and boot when we start our system. So keep this in mind, as it might come in handy one day or another :)

Let's get our hands dirty with running KIE Server on Wildfly Swarm


First, let's just start an empty server that we manage manually - creating containers, running rules and processes via the REST API.

Make sure you're in the project folder (where you executed the Maven build) and then simply run this command:

java -Dorg.kie.server.id=swarm-kie-server -Dorg.kie.server.location=http://localhost:8380/server -Dswarm.port.offset=300 -jar target/kie-server-swarm-1.0-swarm.jar




Wait a while for Wildfly Swarm to boot with KIE Server on it. Once it's completed you should be able to access it at http://localhost:8380/server

NOTE: since KIE Server requires authentication, whenever you attempt to access its REST endpoints you need to log on - by default you should be able to log on with kieserver/kieserver1!
You can customize users and roles by editing the following files:
kie-server-swarm/src/main/config/security/application-users.properties
kie-server-swarm/src/main/config/security/application-roles.properties

Now let's examine a bit what all these parameters mean:

  • -Dorg.kie.server.id=swarm-kie-server - specifies the unique identifier of the KIE Server - it is important when running in managed mode, but it's good to always use it to make it a habit
  • -Dorg.kie.server.location=http://localhost:8380/server - specifies the actual location where our KIE Server is going to be available - this must be a direct URL to the actual instance, even if it's behind a load balancer - again important when running in managed mode
  • -Dswarm.port.offset=300 - sets the global port offset to avoid port conflicts when running many instances of Wildfly on the same machine

Next, let's run our first executable kjar... to do so we just extend the command from the first run and add arguments to the execution:

java -Dorg.kie.server.id=swarm-kie-server -Dorg.kie.server.location=http://localhost:8380/server -jar target/kie-server-swarm-1.0-swarm.jar org.jbpm:HR:1.0




As you can see the only difference is:
org.jbpm:HR:1.0
which is the GAV of the kjar that is going to be deployed upon start of KIE Server on Swarm. So with just a single command line we have a fully functional server with BPM capabilities and the HR project deployed to it.

Last but not least, let's run it in a fully managed way - with a controller.
Before you start Wildfly Swarm with KIE Server, make sure you start the controller (KIE workbench), so you'll see how nicely it registers automatically upon start.

Once the controller (workbench) is running, issue the following command:

java -Dorg.kie.server.id=swarm-kie-server -Dorg.kie.server.location=http://localhost:8380/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller -jar target/kie-server-swarm-1.0-swarm.jar




Again, a single parameter differs from the first command we used to start an empty KIE Server on Swarm - in this case it's the controller URL:
  • -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller
Make sure this URL matches your deployed controller - it can differ in terms of:

  • host (localhost in this case)
  • port (8080 in this case)
  • context root (kie-wb in this case)
Now you're ready to rock with Wildfly Swarm and KIE Server to build your own microservices backed by business knowledge.

Enjoy your dive into Swarm and as usual comments are more than welcome.

jBPM UI extension on KIE Server

KIE Server, first released in 6.3.0.Final with jBPM capabilities (among others), was purely focused on execution. It was, however, lacking some of the functionality BPM users expect:

  • process diagram visualization 
  • process instance diagram visualization
  • process and task forms information 
Since KIE Server is an execution server, it does not come with any UI, and to interact with it a custom UI needs to be built. The technology used to build such a UI does not really matter and is left to developers to choose. Still, certain parts should be possible to get out of KIE Server to improve the UI capabilities.

One of the most desired use cases is to visualize the state of a given process instance - including graphical annotations about which nodes are active and which are already completed, showing the complete flow of the process instance.

This has been added to KIE Server as part of the jBPM UI extension and provides the following capabilities:
  • display the process definition diagram as SVG
  • display the annotated process instance diagram as SVG
    • completed nodes are greyed out
    • active nodes are marked in red
  • display the structure of process forms
  • display the structure of task forms
While displaying process diagrams is self-explanatory, the operations around forms might be a bit confusing. So let's go over them first to understand their usage.

The primary authoring environment is KIE workbench, where users can build various assets such as processes, rules, decision tables, data models and forms. Forms in workbench are built with the Form Modeler, which integrates well with process and task variables by providing binding between inputs and outputs - how data is taken out of a process/task variable and displayed in the form, and vice versa how form data is put back into process variables.

Since KIE Server does not provide any UI, it can neither render nor process forms. It simply expects the data to be given and maps it (by name) to the corresponding process or task variables. While this is completely OK from an execution point of view, it's not so great from a UI and data collection standpoint. So to ease things a bit in this area, KIE Server is now capable of returning the form structure, which can be used later on to render the form with whatever UI technology/framework you like.

Let's take it for a test drive. We will use our well-known HR example to guide you through the usage of this jBPM UI extension to KIE Server.

Form operations

The first endpoint we are going to discuss gets the process form for a given process definition - similar to what you get when you start a process instance in workbench.

Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/containers/hr/forms/processes/hiring
Method:
  • GET

where:
  • hr - is container id
  • hiring - is process id
When you issue this request you'll get the following response:

You can notice a few important properties there:
  • form/name - hiring-taskform - is the name of the form built in the Form Modeler - you'll find it in workbench under the "Form definitions" section in the Project Explorer
  • form/field/name is the name of the first field on that form
  • under the field properties you can find lots of details; depending on your form design you'll see more or less data, though still important:
    • fieldName
    • fieldRequired
    • readonly
    • inputBinding
    • outputBinding
This form structure directly translates to what KIE workbench will render when you start the hiring process.


A similar thing can be done for task forms, with a slightly different endpoint URL as it refers to (already active) tasks.

Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/containers/hr/forms/tasks/123
Method:
  • GET
where:
  • hr - is container id
  • 123 - is task id

The same kind of content as for process forms is returned for tasks. You can notice that different data is filled in for different fields - some have inputBinding set, some have outputBinding set.

So this structure represents this form rendered by workbench:


So with this you can build a custom renderer based on the same form structure that was designed in the Form Modeler that comes with KIE workbench.

Note: In the above example the content is XML but by changing the Accept header to be application/json you'll get JSON content instead.

Image operations

There are two operations available - get the "pure" process definition diagram, or get the annotated process instance diagram.

To get the process definition diagram use the following endpoint:
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/containers/hr/images/processes/hiring
Method:
  • GET
where:
  • hr - is container id
  • hiring - is process id
and this is what you'll get in your browser


To get the annotated process instance diagram, you first have to have an active process instance; once you have its process instance id you can issue the following:
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/containers/hr/images/processes/instances/123
Method:
  • GET
where:
  • hr - is container id
  • 123 - is process instance id
and you'll get this
Here you can see that the start event is greyed out, meaning it was already completed, and the process instance is currently at the HR Interview task.

The content returned for image operations is SVG, with MIME type application/svg+xml,

so make sure your client is capable of displaying SVG content to properly display the diagrams. Note that all major browsers support SVG, and if you can display a process diagram in KIE workbench with a given browser you'll be fine. A minimal client-side sketch is shown below.
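For example, a minimal Java sketch that fetches the annotated process instance diagram over the REST endpoint documented above (the credentials are the kieserver/kieserver1! defaults used elsewhere in this series; adjust the host, container id and process instance id to your setup):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Scanner;

public class ProcessInstanceImageClient {

    public static void main(String[] args) throws Exception {
        // endpoint as documented above: hr is the container id, 123 the process instance id
        URL url = new URL("http://localhost:8230/kie-server/services/rest/server/containers/hr/images/processes/instances/123");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();

        // KIE Server requires authentication - default credentials assumed here
        String auth = Base64.getEncoder().encodeToString("kieserver:kieserver1!".getBytes(StandardCharsets.UTF_8));
        connection.setRequestProperty("Authorization", "Basic " + auth);

        try (InputStream in = connection.getInputStream();
             Scanner scanner = new Scanner(in, "UTF-8")) {
            // the response body is plain SVG markup that can be embedded directly in a custom UI
            String svg = scanner.useDelimiter("\\A").next();
            System.out.println(svg);
        }
    }
}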

Now, the most important configuration setting to enable image operations. KIE workbench by default does not store the SVG version of the process, which means the SVG will not be included in the kjar and thus won't be available to KIE Server. To take advantage of this feature you need to enable it in the workbench configuration files.

Enable SVG on save in workbench

Edit the jbpm.xml file that is stored in (depending on your installation):
  • jbpm installer: 
    • jbpm-console.war/org.kie.workbench.KIEWebapp/profiles/jbpm.xml
  • manual installation 
    • kie-wb{version-container}.war/org.kie.workbench.KIEWebapp/profiles/jbpm.xml
  • Red Hat JBoss BPMS:
    • business-central.war/org.kie.workbench.KIEWebapp/profiles/jbpm.xml
In this file you need to find
        <storesvgonsave enabled="false"/>
and set it to true
        <storesvgonsave enabled="true"/>

Once this is enabled, (re)start workbench and go to your process definition to save it again (a modification will be required), which will trigger the SVG file for that process to be generated and stored in the kjar.

Then deploy that kjar to KIE Server and you can enjoy KIE Server shipping process images for your custom UI.

That's it for the jBPM UI extension that is coming with 6.4.0.Final very soon, so stay tuned.


Community extension to KIE Server - welcome Apache Thrift

In previous articles about KIE Server I described how it can be extended to bring more features to it, starting with enhanced REST endpoints, through building additional transport layers, and finishing with building custom KIE Server client implementations.

It didn't take long and we got official confirmation that it works!!!

Maurice Betzel has done an excellent job and implemented KIE Server extensions that bring Apache Thrift into the picture. That allowed him to bridge the gap between Java and PHP to make use of rule evaluation with KIE Server.

KIE Server with Apache Thrift


I'd like to encourage everyone to look at the detailed description of Maurice's work and take it for a spin to see how powerful it is.

All the credit goes to Maurice, and I'd like to thank him as well for keeping me in the loop and verifying the extension mechanism of KIE Server in real life.

KIE Server clustering and scalability

This article is another in the KIE Server series, and this time we'll focus on clustering and scalability.

KIE Server is by its nature a lightweight and easily scalable component. Compared to the execution environment included in KIE workbench, it can be summarized as follows:

  • it allows partitioning based on deployed containers (kjars)
    • in workbench all containers are deployed to the same runtime
  • it allows individual instances to be scaled independently of each other
    • in workbench, scaling the workbench means scaling all kjars
  • it can be easily distributed across the network and be managed by a controller (workbench by default)
    • workbench is both management and execution, which makes it a single point of failure
  • clustering of KIE Server does not require any additional components in the infrastructure
    • workbench requires ZooKeeper and Helix for a clustered git repository
So what does it mean to scale KIE Server?
First of all, it allows administrators to partition knowledge between different KIE Server instances. For example, HR department processes and rules can run on one set of KIE Server instances, while the Finance department has its own set. Each department's administrator can then scale based on its own needs without affecting the others. That gives us a unique opportunity to focus on the components that really require additional processing power and simply add more instances - either on the same server or distributed across your network.

Let's look at the most common runtime architecture for a scalable KIE Server environment


As described above, the basic runtime architecture consists of multiple independent sets of KIE Servers, where the number of actual server instances can vary. In the above diagram all of them have three instances, but in reality they can have as many (or as few) as needed.

The controller in turn will have three server templates - HR, Finance and IT. Each server template is defined with an identifier that KIE Server instances reference via the system property org.kie.server.id.

In the above screenshot, server templates are defined in the controller (workbench), which becomes the single point of configuration and management of our KIE Servers. Administrators can add or remove and start or stop different containers, and the controller is responsible for notifying all KIE Server instances (that belong to a given server template) about the performed operations. Moreover, when new KIE Server instances are added to the set, they directly receive all containers that should be started and thereby increase processing power.

As mentioned, this is the basic setup, meaning the server instances are used by calling them directly - each individual KIE Server instance. This is a bit troublesome, as users/callers have to deal with instances that are down, etc. To solve this we can put a load balancer in front of the KIE Servers and let it do the heavy lifting for us. Users then simply call a single URL, which is configured to work with all instances in the back end. One choice of load balancer is Apache HTTP Server with the ModCluster plugin, giving an efficient and highly configurable load balancer.


In version 7, the KIE Server client will come with a pluggable load balancer implementation, so when using the KIE Server client users can simply skip the additional load balancer as an infrastructure component. Though it will provide load balancing and failure detection support, it is a client-side load balancer that has no knowledge of the underlying backend servers and thus won't be as efficient as mod_cluster can be.

So this covers scalability of KIE Server instances, as they can be easily multiplied to provide more execution power and, at the same time, distribution both on the network and knowledge (containers) level. But looking at the diagram, a single point of failure is the controller. Remember that in managed mode (where KIE Server instances depend on the controller) they are limited if the controller is down. Let's recap how KIE Server interacts with the controller:

  • when KIE Server starts it attempts to connect to any of the defined controllers (if any)
  • it will connect to only one, as soon as a connection is successful
  • the controller will then provide the list of containers to deploy and the configuration
  • based on this information KIE Server deploys the containers and starts to serve requests
But what happens when none of the controllers can be reached when KIE Server starts? KIE Server will be pretty much useless, as it does not know what containers it should deploy. It will keep checking (at predefined intervals) whether a controller is available. Until a controller becomes available, KIE Server has no containers deployed and thus won't process any requests - the most likely response you'll get from KIE Server when trying to use it will be 'no container found'.

Note: This affects only KIE Servers that start after the controller went down; those that are already running are not affected at all.

So to solve this problem, the workbench (and by that the controller) should be scaled. Here the default configuration of a KIE workbench cluster applies, meaning Apache ZooKeeper and Apache Helix as part of the infrastructure.


In this diagram, we scale the workbench by using Apache ZooKeeper and Helix for clustering of the Git repository. This gives us replication between the server instances (that run the workbench) and thereby provides several synchronized controller endpoints, ensuring KIE Server instances can reach the controller and collect the configuration and containers to be deployed.

Similar to the KIE Servers, the controller can either be reached directly via independent endpoints or again be fronted with a load balancer. KIE Server accepts a list of controllers, so a load balancer is not strictly required, though it is recommended since the workbench is also (or even primarily) used by end users, who would appreciate a load balanced environment as well.

That concludes the description of clustering and scalability of KIE Server. To get the most out of it, let's now take a quick look at what's important to know when configuring such a setup.

Configuration

Workbench
We start with the configuration of the workbench - the controller. The most important part for the controller is authentication, so that connecting KIE Server instances will be authorized. By default, KIE Server upon start sends a request with Basic authentication using the following credentials:
  • username: kieserver
  • password: kieserver1!
so to allow KIE Server to connect, make sure such a user exists in the application realm of your application server.

NOTE: the username and password can be changed on KIE Server side by setting following system properties:
  • org.kie.server.controller.user
  • org.kie.server.controller.pwd

This is the only thing needed on application server that hosts KIE workbench.

KIE Server
On the KIE Server side, there are several properties that must be set on each KIE Server instance. Some of these properties must be the same for all instances representing the same server template defined in the controller.
  • org.kie.server.id - identifier of the KIE Server that corresponds to the server template id; this must be exactly the same for all KIE Server instances that represent a given server template
  • org.kie.server.controller - comma separated list of absolute URLs to the controller(s); this must be the same for all KIE Server instances that represent a given server template
  • org.kie.server.location - absolute URL where this KIE Server instance can be reached; this must be unique for each KIE Server instance, as it's going to be used by the controller to notify about requested changes (e.g. start/stop container). 
Similar to how the workbench authenticates requests, KIE Server does the same, so to allow the controller to connect to a KIE Server instance (at the URL given as org.kie.server.location), the application realm of the server where the KIE Server instances are running must have a user configured. By default the workbench (controller) will use the following credentials:
  • username: kieserver
  • password: kieserver1!
so it must exist in the application realm. In addition, it must be a member of the kie-server role so KIE Server will authorize it to use its REST API.

NOTE: the username and password can be changed on KIE Workbench side by setting following system properties:
  • org.kie.server.user
  • org.kie.server.pwd
There are other system properties that can be set (and most likely will be needed, depending on what KIE Server configuration you require). For those, look at the documentation.

This configuration applies to any way you run KIE Server - standalone WildFly, WildFly domain mode, Tomcat, WAS or WebLogic. It does not really matter which; as long as you follow the set of properties, you'll be ready to go with clustered and scalable KIE Server instances tailored to your domain.

That would be all for today, as usual comments are more than welcome.

jBPM v7 - workbench and kie server integration

As part of the ongoing development of jBPM version 7, I'd like to give a short preview of one of the changes that are coming. This in particular relates to how the workbench and KIE Server are integrated. In version 6 (when KIE Server was introduced with BPM capability) we had two independent execution servers:

  • one embedded in the workbench 
  • another in KIE Server
In many cases this caused a bit of confusion, as users expected to see processes (and tasks, jobs etc.) created in KIE Server via the workbench UI. To achieve that, users were pointing the workbench and KIE Server to the same database, and that led to a number of unexpected issues, as these two were designed differently and were not intended to work in parallel.

jBPM version 7 is addressing this problem in two ways:
  • removes duplication when it comes to execution servers - only kie server will be available - no execution in workbench
  • integrates workbench with kie server(s) so its runtime views (e.g. process instances, definitions, tasks) can be used with kie server as backend

While the first point is rather clear and obvious, the second takes a bit longer to reveal its full power. It's not only about letting users use the workbench UI to start processes or interact with user tasks; it actually allows the flexible architecture of KIE Server to be fully utilized (more on KIE Server can be found in the previous blog series).
In version 6.4 a new Server Management UI was introduced to allow easy and efficient management of KIE Servers. This came with the concept of server templates - a definition of the runtime environment regardless of how many physical instances are going to run with that definition. That in turn allows administrators to define a partitioned environment where different server templates represent different parts of the organization or domain coverage.

Server template consists of:
  • name
  • list of assigned containers (kjars)
  • list of available capabilities
Once any KIE Server starts and connects to the workbench (the workbench acts as controller), it will be presented in server management under remote servers. Remote servers reflect the current state of the controller's knowledge - meaning the list only changes upon two events triggered by KIE Servers:
  • start of the kie server in managed mode - which connects to controller and registers itself as remote server
  • shutdown of the kie server in managed mode - which notifies controller to unregister itself from remote servers
With this setup users can create as many server templates as they need. Moreover, each server template can be backed by as many KIE Server instances as it needs. That gives complete control over how individual server templates (and by that, parts of your business domain) scale individually.

So, enough of the introduction; let's see how it got integrated for execution. Since there is no longer an execution server embedded in the workbench, everything that needs one is carried out by KIE Server(s). To accomplish this the workbench internally relies on two parts:
  • server management (that maintains server templates) to know what is available and where
  • kie server client to interact with remote servers
Server management is used to collect information about:
  • server templates whenever project is to be deployed - regardless if there are any remote servers or not - again this is just update to the definition of the server
  • remote servers whenever kie server interaction is required - start process, get process instances, etc
NOTE: in case multiple server templates are available, a selection box is shown on the screen so users can decide which server template they are going to interact with. Again, users do not care about individual remote servers as they represent the same setup, so it's not important which server instance is used for a given request, as long as one of them is available.
Server templates that do not have any remote servers available won't be visible on the list of server templates.
And when there is only one server template, selection is not required and that one becomes the default - for both deployment and runtime operations.


In the top right corner users can find the server template selection button, in case there is more than one server template available. Once selected, it is preserved across screen navigation, so it only needs to be selected once.

Build & Deploy has been updated to take advantage of the new server management as well, whenever users decide to do build and deploy:
  • if there is only one server template:
    • it gets selected as default
    • artifact name is used as container id
    • by default container is started
  • if there is more than one server template available, the user is presented with an additional popup window to select:
    • container id
    • server template
    • whether the container should be started or not

That concludes the introduction and basic integration between kie server and workbench. Let's now look what's included and what's excluded from workbench point of view (or the differences that users might notice when switching from 6 to 7). 

First of all, the majority of runtime operations are supposed to work exactly the same way; that includes:
  • Process definition view
    • process definition list
    • process definition details
    • Operations
      • start process instance (including forms)
      • visualize process definition diagram


  • Process instance view
    • process instance list (both predefined and custom filters)
    • process instance details
    • process instance variables
    • process instance documents
    • process instance log
    • operations
      • start process instance (including forms)
      • signal process instance (including bulk)
      • abort process instance (including bulk)
      • visualize process instance progress via diagram

  • Tasks instance view
    • task list (both predefined and custom filters)
    • task instance details (including forms)
    • life cycle operations of a task (e.g. claim, start, complete)
    • task assignment
    • task comments
    • task log

  • Jobs view
    • jobs list (both predefined and custom filters)
    • job details
    • create new job
    • depending on status cancel or requeue jobs


  • Dashboards view
    • out of the box dashboards for processes and tasks


All of these views retrieve data from the remote KIE Server, which means there is no need for the workbench to have any data sources defined. Even the default one that comes with WildFly is not needed - not even for dashboards :) With that we have a very lightweight workbench that comes with excellent authoring and management capabilities.

That leads us to the last section in this article, which explains what changed and what was removed. 

Let's start with changes that are worth noting:
  • asynchronous processing of authoring REST operations has been moved from the jBPM executor to the UberFire async service - that makes the biggest change in a clustered setup, where only the cluster member that accepted the request will know its status
  • build & deploy from project explorer is unified - regardless if the project is managed or unmanaged - there are only two options
    • compile - which does only in memory project build
    • build and deploy - which includes build, deploy to maven and provision to server template
Now moving to what was removed

  • since there is no jBPM runtime embedded in the workbench, there are no REST or JMS interfaces for jBPM; the REST interfaces for the authoring part are unchanged (create org unit, repository, compile project etc.)
  • jobs settings are no longer available, as they do not make much sense in the new (distributed) setup where the configuration of KIE Servers is currently done on the server instance level
  • ad hoc tasks are temporarily removed and will be reintroduced as part of case management support, where they actually belong
  • asset management is removed in the form it was known in v6 - the part that is kept is
    • managed repository that allows single or multi module projects
    • automatic branch creation - dev and release
    • repository structure management where users can manage their modules and create additional branches
    • project explorer still supports switch between branches as it used to
  • asset management won't have support for asset promotion, build of projects or release of projects
  • send task reminders - it was sort of hidden feature and more of admin so it's going to be added as part of admin interface for workbench/kie server


Enough talking (... writing/reading, depending on your point of view), it's time to see it in action. Following are two screencasts showing the different use cases covered.

  • Case 1 - from zero to full speed execution
    • Create new repository and project
    • Create data object
    • Create process definition with user task and forms that uses created data object
    • Build and deploy the project
    • Start process instance(s)
    • Work on tasks
    • Visualize the progress of process instance
    • Monitor via dashboards



  • Case 2 - from existing project to document capable processes
    • Server template 1
    • Deploy an already built project (translations)
    • Create process instance that includes document upload
    • Work on tasks
    • Visualize process instance details (including documents)

    • Server template 2
    • Deploy an already built project (async-example)
    • Create process instance to check weather in US based on zip code
    • Work on tasks 
    • Visualize process instance progress - this project does not have image attached so it comes blank
    • Monitor via dashboards


Before we end, a short note for those who want to try it out. Since we have integration with KIE Server and, as you noticed, it does not require any additional login to KIE Server (the workbench uses the logged in user), a little WildFly configuration is needed:
the workbench comes with an additional login module as part of kie-security.jar, so to enable smooth integration when it comes to authentication, please declare that login module in the standalone.xml of your WildFly.

The default "other" security domain should then look like this:
  <security-domain name="other" cache-type="default">
    <authentication>
       <login-module code="Remoting" flag="optional">
          <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="org.kie.security.jaas.KieLoginModule" flag="optional"
                                module="deployment.kie-wb.war"/>
     </authentication>
  </security-domain>

The important element is the module attribute (module="deployment.kie-wb.war"), as it might differ between environments - it relies on the actual file name of the kie-wb.war. Replace it to match the name in your environment.

NOTE: this is only required for kie-wb and not for kie-drools-wb running on WildFly. The current state is that this works on WildFly/EAP7 and Tomcat; WebSphere and WebLogic might come later...

That's all for now, comments and ideas are more than welcome.

Knowledge Driven Microservices

In the area of microservices more and more people are looking into lightweight and domain-focused IT solutions. Regardless of how you look at a microservice, the overall idea is to make sure it does isolated work and doesn't cross the border of the domain it should cover.
That way of thinking made me look into how to leverage the KIE (Knowledge Is Everything) platform to bring in the business aspect and reuse business assets you might already have - that is:
  • business rules
  • business process
  • common data model
  • and possibly more... depending on your domain
In this article I'd like to share the idea that I presented at DevConf.cz and JBCNConf this year. 

Since there is huge support for microservice architecture out there in open source world, I'd like to present one set of tools you can use to build up knowledge driven microservices, but keep in mind that these are just the tools that might (and most likely will) be replaced in the future.

Tools box

jBPM - for process management
Drools - for rule evaluation
Vert.x - for the complete application infrastructure binding it all together
Hazelcast - for cluster support in distributed environments 

Use case - sample

The overall use case was to provide a basic bank loan solution that processes loan applications. The IT solution is partitioned into the following services:

  • Apply for loan service
    • Main entry point to the loan request system
    • Allow applicant to put loan request that consists of: 
      • applicant name 
      • monthly income 
      • loan amount 
      • length in years to pay off the loan

    • Evaluate loan service
      • Rule based evaluation of incoming loan applications 
        • Low risk loan 
          • when the loan request is for an amount lower than 1000 it's considered low risk and thus is auto approved 
        • Basic loan 
          • when the amount is higher than 1000 and the length is less than 5 years - requires a clerk approval process 
        • Long term loan 
          • when the amount is higher than 1000 and the length is more than 5 years - requires manager approval and might need special terms to be established
    • Process loan service
      • Depending on the classification of the loan different bank departments/teams will be involved in decision making about given loan request 
        • Basic loans department 
          • performs background check on the applicant and either approves or rejects the loan 
        • Long term loans department 
          • requires management approval to ensure a long term commitment can be accepted for given application.

      Architecture

      • Each service is completely self contained 
      • Knowledge driven services are deployed with a kjar - a knowledge archive that provides the business assets (processes, rules, etc.)
      • Services talk to each other by exchanging data - business data 
      • Services can come and go as we like - dynamically increasing or decreasing the number of instances of a given service 
      • no API in the strict sense of the word - the API is the data

      More if you're interested...

      Complete presentation from JBCNConf and video from DevConf.cz conference.



      Presentation at DevConf.cz



      In case you'd like to explore the code or run it yourself have a look at the complete source code of this demo in github.

      KIE Server (jBPM extension) brings document support

      Another article in the KIE Server series ... about what's coming in version 7. This time around: documents and their use in business processes.

      Business processes quite frequently need collaboration around documents (in any meaning of the word), thus it is important to allow users to upload and download documents. jBPM already provided document support in version 6, though it was not exposed on KIE Server for remote interaction.

      jBPM 7 will come with support for documents in KIE Server - that covers both use within process context and outside - direct interaction with underlying document storage.


      jBPM and documents

      To recap quickly how document support is provided by jBPM

      Documents are considered process variables and, as such, they are covered by the pluggable persistence strategies. Persistence strategies allow various backend storage to be used for process variables, instead of always storing them together with the process instance in the jBPM database.

      A document is represented by the org.jbpm.document.service.impl.DocumentImpl type and comes with a dedicated marshaling strategy to deal with this type of variable: org.jbpm.document.marshalling.DocumentMarshallingStrategy. In turn, the marshaling strategy relies on org.jbpm.document.service.DocumentStorageService, which is an implementation specific to the document storage of your choice. jBPM comes with an out of the box implementation of the storage service that simply uses the file system as the underlying storage.
      Users can implement an alternative DocumentStorageService to provide any kind of storage, like a database, ECM etc.

      KIE Server in version 7 provides full support for the usage described above - including pluggable DocumentStorageService implementations - and extends it a bit more. It provides a REST API on top of org.jbpm.document.service.DocumentStorageService to allow easy access to the underlying documents without always needing to go through process instance variables, though documents can still be accessed from within a process instance.

      KIE Server provides following endpoints to deal with documents:

      • list documents - GET - http://host:port/kie-server/services/rest/server/documents
        • accept page and pageSize as query parameters to control paging
      • create document - POST - http://host:port/kie-server/services/rest/server/documents
        • DocumentInstance representation in one of supported format (JSON, JAXB, XStream)
      • delete document - DELETE - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}
      • get document (including content) - GET - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}
      • update document - PUT - http://host:port/kie-server/services/rest/server/documents
        • DocumentInstance representation in one of supported format (JSON, JAXB, XStream)
      • get content - GET - http://host:port/kie-server/services/rest/server/documents/{DOC_ID}/content

      NOTE: Same operations are also supported over JMS.
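      As a quick illustration, the list operation can be called from any HTTP client. Here is a minimal Java sketch using plain JDK classes - host, port and credentials are placeholders, while the endpoint and paging parameters are the ones listed above:

      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.net.HttpURLConnection;
      import java.net.URL;
      import java.util.Base64;

      public class ListDocumentsClient {

          public static void main(String[] args) throws Exception {
              // list the first page of documents, 10 entries per page
              URL url = new URL("http://localhost:8230/kie-server/services/rest/server/documents?page=0&pageSize=10");

              HttpURLConnection connection = (HttpURLConnection) url.openConnection();
              connection.setRequestMethod("GET");
              connection.setRequestProperty("Accept", "application/json");
              connection.setRequestProperty("Authorization",
                      "Basic " + Base64.getEncoder().encodeToString("kieserver:kieserver1!".getBytes("UTF-8")));

              // the response is a JSON list of document-instances (see the JavaScript client below for how it can be parsed)
              try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
                  String line;
                  while ((line = reader.readLine()) != null) {
                      System.out.println(line);
                  }
              }
          }
      }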

      Documents in action

      Let's see this in action, by just going over very simple use case:
      • Deploy translations project (that is part of jbpm-playground repository) to KIE Server
      • Create new translation process instance from workbench
      • Create new translation process instance from JavaScript client - simple web page
      • Download and remove documents from JavaScript client



      As can be seen in the above screencast, there is smooth integration between the workbench, KIE Server and the JavaScript client. Even better, KIE Server accepts all the data over a single endpoint - no separate upload of the document followed by a start of the process. 

      Important note - be really cautious when using the delete operation via the KIE Server documents endpoint, as it removes the document completely, meaning there will be no access to it from the process instance (as presented in the screencast); moreover, the process instance won't be aware of it, as it considers document storage an external system.

      Sample source

      For those who would like to try it out themselves, here is the JavaScript client (a simple web page) that was used for the example screencast. Please make sure you drop it on the KIE Server instance to avoid running into CORS related issues.

      <html>
      <head>
      <title>Send document to KIE Server</title>
      <style type="text/css">
      table.gridtable {
      font-family: verdana,arial,sans-serif;
      font-size:11px;
      color:#333333;
      border-width: 1px;
      border-color: #666666;
      border-collapse: collapse;
      }
      table.gridtable th {
      border-width: 1px;
      padding: 8px;
      border-style: solid;
      border-color: #666666;
      background-color: #dedede;
      }
      table.gridtable td {
      border-width: 1px;
      padding: 8px;
      border-style: solid;
      border-color: #666666;
      background-color: #ffffff;
      }
      </style>


      <script type='text/javascript'>
      var user = "";
      var pwd = "";
      var startTransalationProcessURL = "http://localhost:8230/kie-server/services/rest/server/containers/translations/processes/translations/instances";
      var documentsURL = "http://localhost:8230/kie-server/services/rest/server/documents";

      var srcData = null;
      var fileName = null;
      var fileSize = null;
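      // read the file selected in the form and keep its Base64 encoded content for upload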
      function encodeImageFileAsURL() {

      var filesSelected = document.getElementById("inputFileToLoad").files;
      if (filesSelected.length > 0) {
      var fileToLoad = filesSelected[0];
      fileName = fileToLoad.name;
      fileSize = fileToLoad.size;
      var fileReader = new FileReader();

      fileReader.onload = function(fileLoadedEvent) {
      var local = fileLoadedEvent.target.result; // <--- data: base64
      srcData = local.replace(/^data:.*\/.*;base64,/, "");


      console.log("Converted Base64 version is " + srcData);
      }
      fileReader.readAsDataURL(fileToLoad);
      } else {
      alert("Please select a file");
      }
      }

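      // start a new translation process instance, sending the Base64 document content as part of the JSON payload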
      function startTransalationProcess() {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', startTransalationProcessURL);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.setRequestHeader ("Authorization", "Basic " + btoa(user + ":" + pwd));
      xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 201) {
      loadDocuments();
      }
      }
      var uniqueId = generateUUID();
      xhr.send('{' +
      '"uploader_name" : "'+ document.getElementById("inputName").value +'",' +
      '"uploader_mail" : "'+ document.getElementById("inputEmail").value +'", ' +
      '"original_document" : {"DocumentImpl":{"identifier":"'+uniqueId+'","name":"'+fileName+'","link":"'+uniqueId+'","size":'+fileSize+',"lastModified":'+Date.now()+',"content":"' + srcData + '","attributes":null}}}');
      }


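      // delete a document by id via the KIE Server documents endpoint and refresh the list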
      function deleteDoc(docId) {
      var xhr = new XMLHttpRequest();
      xhr.open('DELETE', documentsURL +"/" + docId);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.setRequestHeader ("Authorization", "Basic " + btoa(user + ":" + pwd));
      xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 204) {
      loadDocuments();
      }
      }

      xhr.send();
      }

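      // fetch all documents from KIE Server and render them as a table with download and delete actions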
      function loadDocuments() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', documentsURL);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.setRequestHeader ("Authorization", "Basic " + btoa(user + ":" + pwd));
      xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 200) {
      var divContainer = document.getElementById("docs");
      divContainer.innerHTML = "";
      var documentListJSON = JSON.parse(xhr.responseText);
      var documentsJSON = documentListJSON['document-instances'];
      if (documentsJSON.length == 0) {
      return;
      }
      var col = [];
      for (var i = 0; i < documentsJSON.length; i++) {
      for (var key in documentsJSON[i]) {
      if (col.indexOf(key) === -1) {
      col.push(key);
      }
      }
      }
      var table = document.createElement("table");
      table.classList.add("gridtable");

      var tr = table.insertRow(-1);

      for (var i = 0; i < col.length; i++) {
      var th = document.createElement("th");
      th.innerHTML = col[i];
      tr.appendChild(th);
      }
      var downloadth = document.createElement("th");
      downloadth.innerHTML = 'Download';
      tr.appendChild(downloadth);
      var deleteth = document.createElement("th");
      deleteth.innerHTML = 'Delete';
      tr.appendChild(deleteth);

      for (var i = 0; i < documentsJSON.length; i++) {

      tr = table.insertRow(-1);

      for (var j = 0; j < col.length; j++) {
      var tabCell = tr.insertCell(-1);
      tabCell.innerHTML = documentsJSON[i][col[j]];
      }
      var tabCellGet = tr.insertCell(-1);
      tabCellGet.innerHTML = '<button id="button" onclick="window.open(\'' + documentsURL +'/'+documentsJSON[i]['document-id']+'/content\')">Download</button>';

      var tabCellDelete = tr.insertCell(-1);
      tabCellDelete.innerHTML = '<button id="button" onclick="deleteDoc(\''+documentsJSON[i]['document-id']+'\')">Delete</button>';
      }

      divContainer.appendChild(table);
      }
      }

      xhr.send();
      }


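      // generate a pseudo-random UUID used as the document identifier and link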
      function generateUUID() {
      var d = new Date().getTime();
      var uuid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
      var r = (d + Math.random()*16)%16 | 0;
      d = Math.floor(d/16);
      return (c=='x' ? r : (r&0x3|0x8)).toString(16);
      });
      return uuid;
      }
      </script>
      </head>
      <body>
      <h2>Start transalation process</h2>
      Name: <input name="name" type="text" id="inputName"/><br/><br/>
      Email: <input name="email" type="text" id="inputEmail"/><br/><br/>
      Document to translate: <input id="inputFileToLoad" type="file" onchange="encodeImageFileAsURL();" /><br/><br/>
      <input name="send" type="submit" onclick="startTransalationProcess();" /><br/><br/>
      <hr/>
      <h2>Available documents</h2>
      <button id="button" onclick="loadDocuments()">Load documents!</button>
      <div id="docs">

      </div>
      </body>
      </html>

      And as usual, share your feedback as that is the best way to get improvements that are important to you.

      Improved container handling and updates in KIE Server

      KIE Server allows multiple kjars to be deployed, even the same project in different versions. It is actually quite common to add a new version of a project (kjar) next to one that is already running. That in turn enforces unique container ids for each project.
      Let's take an example - at the beginning we start with the first project

      Group id: org.jbpm
      Artifact id: my-project
      Version: 1.0

      This project has a single process inside, and once built we deploy it with the container id set to my-project.

      Then we realize that the process needs an update (of whatever type), so we need to increase the project version and again build and deploy it to KIE Server. That gives us another project version

      Group id: org.jbpm
      Artifact id: my-project
      Version: 2.0

      Then we cannot deploy it with the same container id (my-project) so we need to change it to something else ... my-project2 (most likely ;))

      So what's wrong with that? Well, first of all the naming convention starts to be affected by the versioning scheme used, which might be good or bad depending on how it's used. But more importantly, clients interacting with these projects must be aware of the versions all the time.
      That in turn binds the client application to the release cycle of the projects, in particular to their new versions (and by that, processes and other assets).

      What can we do about it...?
      The improvement that comes in version 7 allows aliases to be defined for containers (a container being the runtime representation of a kjar). An alias can be added to as many containers as needed, and by default (when not given) it uses the artifact id of the project.
      Aliases are not constrained to the same group and artifact ids, so projects with different GA coordinates can use the same alias. The alias can then be used all the time when interacting with KIE Server; the behavior differs depending on the operation performed, as it might require some additional logic to figure out the actual container to be used. Let's examine these situations.

      Starting new process instance

      As described above, unique container ids prevent clients from using a single endpoint to start process instances of a given process id, as they always need to provide the container id, which differs between versions. When an alias is used instead, the client application can always use the same container alias (instead of the container id) and thus always start the latest version of the process. 

      To start a process of the latest version, KIE Server takes the container alias, finds all containers that declare that alias and then searches for the latest by comparing project versions - it's based on a Maven-like version comparator, though it only takes into account the version and not the group or artifact id.

      So if we deploy the first project and then issue following request:
      http://localhost:8230/kie-server/services/rest/server/containers/my-project/processes/evaluation/instances

      where: 
      • my-project is container alias
      • evaluation is process id
      it will then start new instance from org.jbpm:my-project:1.0 project.

      Next, if we deploy the 2.0 version and then issue the exact same request (same URL), the new instance will be started from the org.jbpm:my-project:2.0 project.

      That gives us the option to continuously deploy new versions and ensure that clients who rely on our processes always use the latest version of the processes available in the system. It always works on live information, so if you then remove version 2.0 from KIE Server and start another instance, it will be back on the org.jbpm:my-project:1.0 project, as it's the latest one available.
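      The same applies when using the KIE Server client API - the alias simply goes wherever the container id would normally be passed. Here is a minimal sketch; the server URL, credentials, alias and process id are placeholders taken from the example above:

      import org.kie.server.api.marshalling.MarshallingFormat;
      import org.kie.server.client.KieServicesClient;
      import org.kie.server.client.KieServicesConfiguration;
      import org.kie.server.client.KieServicesFactory;
      import org.kie.server.client.ProcessServicesClient;

      public class StartProcessByAlias {

          public static void main(String[] args) {
              // connect to KIE Server - adjust URL and credentials to your environment
              KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                      "http://localhost:8230/kie-server/services/rest/server", "kieserver", "kieserver1!");
              config.setMarshallingFormat(MarshallingFormat.JSON);

              KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
              ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);

              // "my-project" is the container alias, so whichever deployed version is the latest will be used
              Long processInstanceId = processClient.startProcess("my-project", "evaluation");
              System.out.println("Started process instance " + processInstanceId);
          }
      }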

      Interacting with existing process instance

      Interaction with existing process instances depends on the process instance id. To be able to perform any operation on a process instance, its id must be given. Based on that information, KIE Server identifies the correct container id to be used.
      The container id is needed to be able to:
      • unmarshal incoming data (like variables)
      • find correct runtime manager 
      • marshal outgoing data
      Both incoming and outgoing data might refer to project specific types (that can change between versions) and thus it's important that the right one is used.

      Interacting with tasks

      Similar to process instances, interaction with tasks depends on the task id. KIE Server will locate the proper container by task id to be able to correctly deal with the request. The container id is used for exactly the same operations as in the process instance case (unmarshaling and marshaling data and finding the correct runtime manager).

      Interacting with process definition image and forms

      Interacting with the process definition image and process forms works the same way as start process - it always returns the latest one when using a container alias.

      Below you can see this in action in the following screencast. The screencast illustrates use with the workbench integrated with KIE Server, so as you can see, when you press build and deploy you will be given all the details already filled in:
      • container id is artifactId _ version
      • alias is artifact id
      Although you're in full control and can change both defaults to whatever you need.



      And again, container aliases are set either explicitly (if given when creating containers) or implicitly (based on artifact id). That does not mean you have to use them, though. In some cases container ids are good enough and they will still work the same way as they do now.

      Hopefully this will bring another reason to move to KIE Server and start using its full potential :)


      Case management - jBPM v7 - Part 1

      This article starts a new series of blog posts about the case management feature coming in jBPM v7, illustrating its capabilities with complete examples that will get more complex/advanced with each part.

      One of the most frequently requested features in jBPM is so-called Case Management. Case management can mean different things depending on who you talk to, so I'd like to start with a small scope definition of what it means in the context of jBPM (at the moment, as that might change based on feedback, supported features and use cases, and further evolution).

      Case management is best described when compared to business processes. Business processes are usually modeled as flow charts with clearly defined paths to reach a business goal. These processes usually have one (but might have more) starting point and are structurally connected to build an end to end flow of work and data.



      Cases, on the other hand, are more dynamic; they provide room for improvement as the case evolves, without the need to foresee all possible actions in advance. A case definition usually consists of loosely coupled process fragments that can be connected (directly or indirectly) to lead to certain milestones and finally the business goal.

      Looking at different notations that can be used for case management, processes and cases might be represented differently:

      • BPMN2
      • CMMN
      jBPM comes with cases support based on BPMN2 as most users are familiar with this notation and most if not all features can be represented with BPMN2 constructs. That's at least a starting point which might be revisited further on. A good comparison between BPMN2 and CMMN was published by Bruce Silver.

      This article series will introduce readers to case management support gradually, adding more features as we go so as not to provide too many details at once, and letting the features described be backed by examples that can be seen (screencasts) and executed on an actual jBPM v7 environment.

      Case project

      The first thing to start with is to create a case project - a special type of project in KIE workbench that builds on top of a regular project and configures it for case management:
      • set runtime strategy to Per Case
      • configure marshallers for case file and documents
      • create WorkDefinition.wid files in the project and its packages to ensure case related nodes (e.g. Milestone) are available in palette 


      Case definition

      So let's start with a basic case definition example that covers the following use case - IT hardware orders. As in any company, there is a need from time to time to order new IT equipment - such as computers, phones, etc. This kind of system is well represented by case management, as it usually deals with a number of dynamic decisions that might influence the flow. 

      The case definition is created in the authoring perspective in KIE workbench - it expects a name, a location and optionally a case ID prefix. What's that? The case ID prefix is a configurable element that allows different types of cases to be easily distinguished. The default mechanism is that the prefix is followed by a generated id in the following format:

      ID-XXXXXXXXXX

      where X is a generated number that produces a unique id together with the prefix. If the prefix is not given, it defaults to CASE and then each subsequent instance of that case definition will be:
      CASE-0000000001
      CASE-0000000002
      CASE-0000000003

      or when prefix is set to HR
      HR-0000000001
      HR-0000000002
      HR-0000000003

      A case definition is always an ad hoc process definition, meaning it is a dynamic process and so does not require explicit start nodes.

      Once the empty definition is created, it's time to define the roles involved in the usual case of ordering new IT hardware:
      • owner - is the person who requests the hardware (can be only one)
      • manager - is direct manager of the owner to approve the requested hardware
      • supplier - set of people that can order and deliver physical equipment (usually more than one)
      When the roles are known, case management must ensure that these are not hardcoded to a single set of people/groups as part of the case definition, and that they can differ per case instance. This is where case role assignments come into the picture; they can be:
      • given when case starts
      • set at any given point in time while case is active
      • removed at any given point in time while case is active
      The second and third options do not alter task assignments for already active tasks.

      What is important to note here is that in case management users should always use roles for task assignments instead of actual user/group names; that is to make the case as dynamic as possible, so the actual user/group assignment is done as late as possible. It's similar to process variables, though without the expression syntax (#{variable}).

      Let's take a look at our case definition:


      So what do we have here? The first thing that stands out is - no start nodes in the process. Does that mean there is no way to tell what is going to be triggered when a new instance of this case definition is created?
      Quite the opposite - nodes that have no incoming connections and are marked as Adhoc Autostart (a property of a node) will be automatically triggered when the instance is started.

      In this case these are:
      • Prepare hardware spec
      • Milestone 1: Order placed
      Both of these nodes are wait states, meaning they are triggered but not left; they wait for further action:
      • Prepare hardware spec - waits for the supplier to provide the spec and complete the task
      • Milestone 1: Order placed - waits for a condition to be met - there is a case file variable named "ordered" with value true
      Hmmm, but what is a case file then? A Case File is like a bucket for data for the entire case instance. Since a case can span a number of process instances, instead of copying data back and forth (which, first of all, might be expensive and, second, can lead to use of out of date information), process instances can write to and read from the case file, which is accessible to all process instances that belong to the same case. The CaseFile is stored in working memory and thus is persistable just like the ksession and process instance - meaning it can use marshaling strategies to be stored in different places, e.g. documents, JPA entities etc. What's more important - it is a fact in working memory and thus can be the subject of rules.

      The Milestone actually uses the case file as its condition: it triggers only if there is an ordered variable available in the case file and its value is true. Only then will the milestone be completed and proceed to the next node.

      Another part worth noting is the end signals at the end of the Milestone 1 and Milestone 2 fragments. These signals are responsible for triggering the next Milestone in line, but again, only triggering and not completing it, as it will wait on its condition. The scope of the signal is the process instance only, so completing Milestone 1 in the first case instance will not cause any side effects on other active case instances of the same definition.

      Here is a complete design of this project and case definition as screencast.





      Complete source code of this project (and the entire repository) can be found here. This repository can be cloned directly to workbench for build and deploy.

      ... speaking of build and deploy....

      The project can be built and deployed directly in the workbench and (assuming you have KIE Server connected to the workbench) provisioned to the execution environment, where it can be started and worked on.

      At the moment the workbench does not provide any case management UI, so we will use REST calls to start a case and put data into the case file, but we can use the workbench for user task interaction and overall monitoring - process instance logs, process instance image, active nodes, etc.

      Start new case

      To start a new case use following endpoint:
      Endpoint::
      http://host:port/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances

      where

      • itorders is the container alias that was deployed to KIE Server
      • itorders.orderhardware is case definition id
      As described above, at the time when new case is started it should provide basic configuration - role assignments:

      POST body::
      {
        "case-data" : {  },
        "case-user-assignments" : {
          "owner" : "maciek",
          "manager" : "maciek"
        },
        "case-group-assignments" : {
          "supplier" : "IT"
       }
      }

      At the moment case-data is empty, as we don't supply any data/information to the case. But we do configure our defined roles. Two of them are user assignments (as can be seen in the above screencast, they are referenced in the Actor property of user tasks) and the third is a group assignment (as it is referenced in the Groups property of a user task).

      Once successfully started, it will return a case ID that should look like
      IT-0000000001
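      For completeness, here is a minimal Java sketch that issues the same POST with the body shown above - host, port and credentials are placeholders, while the container alias and case definition id are the ones from this example:

      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.io.OutputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;
      import java.util.Base64;

      public class StartCaseClient {

          public static void main(String[] args) throws Exception {
              URL url = new URL("http://localhost:8230/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances");

              HttpURLConnection connection = (HttpURLConnection) url.openConnection();
              connection.setRequestMethod("POST");
              connection.setRequestProperty("Content-Type", "application/json");
              connection.setRequestProperty("Authorization",
                      "Basic " + Base64.getEncoder().encodeToString("kieserver:kieserver1!".getBytes("UTF-8")));
              connection.setDoOutput(true);

              // same role assignments as in the POST body above
              String body = "{ \"case-data\" : { }, "
                      + "\"case-user-assignments\" : { \"owner\" : \"maciek\", \"manager\" : \"maciek\" }, "
                      + "\"case-group-assignments\" : { \"supplier\" : \"IT\" } }";
              try (OutputStream out = connection.getOutputStream()) {
                  out.write(body.getBytes("UTF-8"));
              }

              // the response body contains the generated case id, e.g. IT-0000000001
              try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
                  System.out.println("Started case: " + reader.readLine());
              }
          }
      }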

      Then this case can already be seen in the process instance list in the workbench, and its tasks should be available in the task perspective. So the tasks can be completed and the various milestones will be achieved until it reaches the Milestone that requires the shipped variable to be present in the case file.

      Insert case file data

      Case file data can be easily inserted into active case using REST api.
      Endpoint::
      http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/caseFile/shipped

      where
      • itorders is the container alias that was deployed to KIE Server
      • IT-0000000001 is the unique id of a case instance
      • shipped is the name of the case file variable to be set/replaced
      POST body::
      true

      The same should later be repeated to insert the "delivered" case file variable to achieve Milestone 3 and move to the final task - Customer Satisfaction Survey. And that's all for this basic case example.

      Execution in action can be found in this screencast



      Comments and ideas are more than welcome. In addition, contributions on what cases should be provided as examples are wanted!

      Case management - jBPM v7 - Part 2 - working with case data

      In Part 1, the basic concepts around case management brought by jBPM 7 were introduced. It was a basic example (IT order handling), as it was limited to just moving through case activities and providing basic data to satisfy milestone conditions.

      In this article, the case file will be described in more detail, along with how it can be used from within a case and process. So let's start with a quick recap of the variables available in processes.

      There are several levels where variables can be defined:

      • process level - process variable
      • subprocess level - subprocess variable
      • task level - task variable
      Obviously the process level is the entry point where all the others take their variables from. Meaning, if a process instance creates a subprocess, it will usually include a mapping from the process level to the subprocess level. Similarly for tasks, tasks get their variables from the process level. 
      In such cases the variable is copied to ensure isolation for each level. That is, in most cases, the desired behavior, unless you need to keep all variables up to date at all times, regardless of the level they are in. That, in turn, is the usual situation in case management, which expects the most up to date variables at any time in the case instance, regardless of their level.

      So that's why case management in jBPM is equipped with a Case File, of which there is only one for the entire case, regardless of how many process instances compose the case instance. Storing data in the case file promotes reuse instead of copying, so each process instance can take a variable directly from the case file, and the same applies to updates. There is no need to copy the variable around; simply refer to it from your process instance.

      Support for case file data is provided at design time by marking given variable as case file


      As can be seen in the above screenshot, the variable hwSpec is marked as a case file variable, while the other (approved) is a process variable. That means hwSpec will be available to all processes within a case; moreover, it will be accessible directly from the case file even without process instance involvement.

      Next, case variables can be used in data input and output mapping


      Case file variables are prefixed with caseFile_ so the engine can properly handle them. A simplified version (without the prefix) is expected to work as well, though for clarity and readability it's recommended to always use the prefix.

      Extended order hardware example

      In part 1, there was a very basic case definition, with no data, for handling IT hardware orders. In this article we extend the example to illustrate:
      • use of case file variables
      • use of documents
      • share of the information between process instances via case file
      • use business process (via call activity) to handle placing order activity


      The following screencast shows all the design time activities needed to extend the part 1 example, including the awesome feature to copy an entire project!




      So what was done here:

      • create new business process - place-order that will be responsible for placing order activity instead of script task from previous example
      • define case file variables:
        • hwSpec - which is a physical document that needs to be uploaded
        • ordered - which is indication for Milestone 1 to be achieved 
      • replace script task for Place order activity with reusable subprocess - important to note is that there are no variables mapping in place, all is directly taken from case file
      • generate forms to handle file upload and slightly adjust their look

      With these few simple steps our case definition is enhanced with quite a few new features, making it much more applicable. It's quite common to include files/documents in a case, and they should still be available even if the process instance that uploaded them is gone. That's provided by the case file, which is there as long as the case instance has not been destroyed.

      Let's now run the example to see it in action




      The execution is similar to part one, meaning to start the case we need to use the REST API. A part worth noting here is that we made a new version of the project:

      •  org.jbpm.demo
      • itorders
      • 2.0
      and then it was deployed on top of the first version, on the exact same KIE Server. Even though both versions are running, the URL to start the case didn't change:


      Endpoint::
      http://host:port/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances

      where

      • itorders is the container alias that was deployed to KIE Server
      • itorders.orderhardware is case definition id

      Method: POST

      As described above, at the time when new case is started it should provide basic configuration - role assignments:

      POST body::
      {
        "case-data" : {  },
        "case-user-assignments" : {
          "owner" : "maciek",
          "manager" : "maciek"
        },
        "case-group-assignments" : {
          "supplier" : "IT"
       }
      }

      itorders is an alias that, when used, will always select the latest version of the project. If there is a need to explicitly pick a given version, simply replace the alias with the container id (itorders_2.0 or itorders_1.0).

      Once the process is started, the supplier (based on the role assignment - the IT group) will have a task to complete to provide the hardware specification - upload a document. Then the manager can review the specification and approve (or not) the order. Then it goes to the subprocess to actually handle the ordering, which once done will store the status into the case file, which will then trigger milestone 1.

      Throughout all these user oriented activities, the case file information (hwSpec) is shared without any need to copy it around. Moreover, there was no need to configure anything to handle documents either; that is all done by creating a case project, which by default sets up everything that is needed.

      At any time (as long as the case has not been destroyed) you can get the case file to view the data. It can be retrieved via the following endpoint

      Endpoint::
      http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/caseFile 

      where:
      • itorders - is the container alias
      • IT-0000000001 - is the case ID
      Method: GET



      With this, this part concludes - since it is just a slightly enhanced order hardware case, it's certainly not all that you can do with it, so stay tuned for more :)

      Try it yourself

      As usual, complete source code is located in github.

      Case management - jBPM v7 - Part 3 - dynamic activities

      It's time for the next article in the "Case Management" series; this time let's look at dynamic activities that can be added to a case at runtime. Dynamic means the process definition behind a case has no such node/activity defined, and thus it cannot simply be signaled as was done for some of the activities in the previous articles (Part 1 and Part 2).

      So what can be added to a case as dynamic activity?

      • user task
      • service task - which is pretty much any type of service task that is implemented as work item 
      • sub process - reusable

      User and service tasks are quite simple and easy to understand: they are just added to the case instance and immediately executed. Depending on the nature of the task, it might start and wait for completion (user task) or finish directly after execution (service task). Although most service tasks (as defined in the BPMN2 spec - Service Task) will be invoked synchronously, they can be configured to run in the background or even wait for an external signal to be completed - it all depends on the implementation of the work item handler.
      A subprocess is slightly different in what the process engine expects - the process definition that is going to be started as a dynamic subprocess must exist in the kjar. That is to make sure the process engine can find the process by its id to execute it. There are no restrictions on what the subprocess does: it can be synchronous without wait states, or it can include user tasks or other subprocesses. Moreover, such a subprocess will have its correlation key set with the first property being the case id of the case where the dynamic activity was created. So from the case management point of view it belongs to that case and thus sees all case data (from the case file - see more details about the case file in Part 2).

      Create dynamic user task

      To create a dynamic user task, a few things must be given:
      • task name
      • task description (optional though recommended to be used)
      • actors - list of comma separated actors to assign the task, can refer to case roles for dynamic resolving 
      • groups - same as for actors but referring to groups; again, case roles can be used
      • input data - task inputs to be available to task actors
      A dynamic user task can be created via the following endpoint:

      Endpoint::
      http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks

      where 
      • itorders is container id
      • IT-0000000001 is case id
      Method::
      POST

      Body::
      {
       "name" : "RequestManagerApproval",
       "data" : {
       "reason" : "Fixed hardware spec",
       "caseFile_hwSpec" : "#{caseFile_hwSpec}"
       }, 
       "description" : "Ask for manager approval again",
       "actors" : "manager",
       "groups" : "" 
      }

      This will create a new user task associated with case IT-0000000001, and the task will be assigned to the person who holds the case role named manager. The task will have two input variables:
      • reason
      • caseFile_hwSpec - defined as an expression to allow capturing process/case data at runtime
      There might be a form defined to provide a user friendly UI for the task; it will be looked up by task name - in this case RequestManagerApproval (and the form file name should be RequestManagerApproval-taskform.form in the kjar).
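      The same can be done from Java with the KIE Server client. Below is a minimal sketch assuming the case client from kie-server-client exposes an addDynamicUserTask operation that mirrors the REST payload above - the exact method name and signature may differ between versions, and the server URL and credentials are placeholders.

import java.util.HashMap;
import java.util.Map;

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.CaseServicesClient;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class AddDynamicUserTask {

    public static void main(String[] args) {
        // placeholder server location and credentials
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "user", "password");
        config.setMarshallingFormat(MarshallingFormat.JSON);

        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
        CaseServicesClient caseClient = client.getServicesClient(CaseServicesClient.class);

        Map<String, Object> data = new HashMap<>();
        data.put("reason", "Fixed hardware spec");
        data.put("caseFile_hwSpec", "#{caseFile_hwSpec}");

        // assumed signature: container id, case id, task name, description, actors, groups, input data
        caseClient.addDynamicUserTask("itorders", "IT-0000000001",
                "RequestManagerApproval", "Ask for manager approval again", "manager", "", data);
    }
}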

      Create dynamic service task

      Service tasks are slightly less complex from a general point of view, though they might need more data to be provided to properly perform the execution. Service tasks require the following to be given:
      • name - name of the activity
      • nodeType - type of a node that will be then used to find the work item handler
      • data - map of data to properly deal with execution
      A service task can be created with the same endpoint as a user task; the difference is in the body payload.
      Endpoint::
      http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks

      where 
      • itorders is container id
      • IT-0000000001 is case id
      Method::
      POST

      Body::
      {
       "name" : "InvokeService",
       "data" : {
       "Parameter" : "Fixed hardware spec",
       "Interface" : "org.jbpm.demo.itorders.services.ITOrderService",
       "Operation" : "printMessage",
       "ParameterType" : "java.lang.String"
       }, 
       "nodeType" : "Service Task"
      }

      In this example, a Java based service is executed. It consists of a public class org.jbpm.demo.itorders.services.ITOrderService with a public printMessage method taking a single argument of type String. Upon execution, the Parameter value is passed to the method.
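      For reference, such a service class can be as simple as the sketch below - the actual class in the sample project may look different, this just matches the signature described above.

package org.jbpm.demo.itorders.services;

public class ITOrderService {

    // invoked by the service task handler - the Parameter value from the
    // task data is passed in as the single String argument
    public void printMessage(String message) {
        System.out.println("About to order: " + message);
    }
}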

      The number, names and types of data given to create service tasks depend completely on the implementation of the service task's handler. In this example org.jbpm.process.workitem.bpmn2.ServiceTaskHandler was used.

      NOTE: For any custom service task, make sure the handler is registered in the deployment descriptor, in the Work Item Handlers section, under the same name as the nodeType used when creating the dynamic service task.

      Create dynamic subprocess

      A dynamic subprocess expects only optional data to be provided; there are no special parameters as for tasks, so it's quite straightforward to create.

      Endpoint::
      http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/processes/itorders-data.place-order

      where 
      • itorders is container id
      • IT-0000000001 is case id
      • itorders-data.place-order is the process id of the process to be created
      Method::
      POST

      Body::
      {
       "any-name" : "any-value"
      }

      Mapping of output data

      Typically, when dealing with regular tasks or subprocesses, users define data output associations to instruct the engine which output of the source (task or subprocess instance) should be mapped to which process instance variable. Since dynamic tasks do not have a data output definition, there is only one way to map output from a task/subprocess back to the process instance - by name. This means the name of the returned output of a task must match the name of the process variable it should be mapped to; otherwise that output is ignored. Why is that? It safeguards the case/process instance from being polluted with unrelated variables, so that only expected information is propagated back to the case/process instance.
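      To make the by-name rule concrete, here is a tiny work item handler sketch (the handler and variable names are made up for illustration): the output named approved is copied back to the case/process instance only if a variable with that exact name exists there.

import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class ApprovalCheckHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // complete the work item with an output named "approved" - it is mapped back
        // to the case/process instance only when a variable with that exact name exists
        Map<String, Object> results = new HashMap<>();
        results.put("approved", Boolean.TRUE);
        manager.completeWorkItem(workItem.getId(), results);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // nothing to clean up in this sketch
    }
}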

      Look at this in action

      As usual, there are screen casts to illustrate this in action. First comes the authoring part that shows:
      • creation of additional form to visualize dynamic task for requesting manager approval
      • simple java service to be invoked by dynamic service task
      • declaration of service task handler in deployment descriptor


      Next, it is shown how it actually works in the runtime environment (KIE Server)




      The complete project that can be imported and executed can be found on GitHub.

      So that concludes part 3 of case management in jBPM 7. Comments and ideas more than welcome. And that's still not all that is coming :)

      Administration interfaces in jBPM 7

      In many cases, when working with business processes, users end up in situations that were not foreseen: e.g. a task was assigned to a user that has left the company, a timer was scheduled with the wrong expiration time, and so on.

      jBPM from its early days had the capabilities to deal with these, though it required substantial knowledge of jBPM's low level APIs. Those days are now over: jBPM version 7 comes with administration APIs that cover:

      • process instance operations
      • user task operations
      • process instance migration

      These administration interfaces are supported both in jBPM services and in KIE Server, so users have the full power to perform quite advanced operations when utilizing jBPM as a process engine, regardless of whether it is embedded (jBPM services API) or used as a service (KIE Server).

      Let's start by quickly looking at what sort of capabilities each of the services provides.

      Process instance Administration


      The process instance administration service provides operations around the process engine and individual process instances. Following is the complete list of supported operations with a short description of each:
      • get process nodes - by process instance id - returns all nodes (including embedded subprocesses) that exist in the given process instance. Even though the nodes come from the process definition, it's important to get them via the process instance to make sure a given node exists and has a valid node id, so it can be used successfully with other admin operations
      • cancel node instance - by process instance id and node instance id - does exactly what the name suggests: cancels the given node instance within the process instance
      • retrigger node instance - by process instance id and node instance id - retriggers by first canceling the active node instance and then creating a new instance of the same type - it sort of recreates the node instance
      • update timer - by process instance id and timer id - updates the expiration of an active timer, taking into consideration the time elapsed since the timer was scheduled. For example: if a timer was initially created with a delay of 1 hour and after 30 minutes it's decided to update it to 2 hours, it will then expire 1.5 hours from the time it was updated. It allows updating:
        • delay - duration after timer expires
        • period - interval between timer expiration - applicable only for cycle timers
        • repeat limit - limit the expiration to given number - applicable only for cycle timers
      • update timer relative to current time - by process instance id and timer id - similar to the regular timer update, but relative to the current time. For example: if a timer was initially created with a delay of 1 hour and after 30 minutes it's decided to update it to 2 hours, it will expire 2 hours from the time it was updated.
      • list timer instances - by process instance id - returns all active timers found for given process instance
      • trigger node - by process instance id and node id - allows to trigger (instantiate) any node in process instance at any time.

      Complete ProcessInstanceAdminService can be found here.
      KIE Server client version of it can be found here.
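      To give an idea of how this looks from the KIE Server client side, here is a minimal sketch. It assumes the admin client exposes methods matching the operations listed above - the exact method names and signatures may differ between versions, and the server URL, credentials and ids are placeholders.

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class ProcessInstanceAdminExample {

    public static void main(String[] args) {
        // placeholder server location and credentials
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "admin", "password");
        config.setMarshallingFormat(MarshallingFormat.JSON);

        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
        ProcessAdminServicesClient adminClient = client.getServicesClient(ProcessAdminServicesClient.class);

        String containerId = "itorders"; // placeholder container
        Long processInstanceId = 1L;     // placeholder process instance

        // retrigger a node instance that got stuck (placeholder node instance id)
        adminClient.retriggerNodeInstance(containerId, processInstanceId, 5L);

        // update an active timer (placeholder timer id) - delay, period and repeat limit;
        // verify the expected time unit against the javadoc of your version
        adminClient.updateTimer(containerId, processInstanceId, 3L, 7200, 0, 0);
    }
}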


      User task administration


      User task administration mainly provides useful methods to manipulate task assignments (users and groups), task data, and automatic (time based) notifications and reassignments. Following is the complete list of operations supported by the user task administration service:
      • add/remove potential owners - by task id - supports both users and groups with option to remove existing assignment
      • add/remove excluded owners - by task id - supports both users and groups with option to remove existing assignment
      • add/remove business administrators  - by task id - supports both users and groups with option to remove existing assignment
      • add task inputs - by task id - modify task input content after task has been created
      • remove task inputs - by task id - completely remove task input variable(s)
      • remove task output - by task id - completely remove task output variable(s)
      • schedules new reassignment to given users/groups after given time elapses - by task id - schedules automatic reassignment based on time expression and state of the task:
        • reassign if not started (meaning when task was not moved to InProgress state)
        • reassign if not completed (meaning when task was not moved to Completed state)
      • schedules new email notification to given users/groups after given time elapses - by task id - schedules automatic notification based on time expression and state of the task:
        • notify if not started (meaning when task was not moved to InProgress state)
        • notify if not completed (meaning when task was not moved to Completed state)
      • list scheduled task notifications - by task id - returns all active task notifications
      • list scheduled task reassignments - by task id - returns all active tasks reassignments
      • cancel task notification - by task id and notification id - cancels (and unschedules) task notification
      • cancel task reassignment - by task id and reassignment id - cancels (and unschedules) task reassignment
      NOTE: all user task admin operations must be performed as a business administrator of the given task - that means every single call to the user task admin service is checked for authorization, and only business administrators of the given task are allowed to perform the operation.

      Complete UserTaskAdminService can be found here.
      KIE Server client version of it can be found here.


      Process instance migration


      ProcessInstanceMigrationService provides an administrative utility to move given process instance(s) from one deployment to another, or from one process definition to another. Its main responsibility is to allow a basic upgrade of the process definition behind a given process instance. That might include mapping currently active nodes to other nodes in the new definition.

      Migration does not deal with process or task variables; they are not affected by migration. Essentially, process instance migration means a change of the underlying process definition the process engine uses to move on with the process instance.

      Even though process instance migration is available, it's recommended to let active process instances finish and then start new instances with the new version whenever possible. In case that approach can't be used, migration of active process instances needs to be carefully planned before execution, as it might lead to unexpected issues. The most important things to take into account are:
      • is new process definition backward compatible?
      • are there any data changes (variables that could affect process instance decisions after migration)?
      • is there need for node mapping?
      Answers to these questions might save a lot of headaches and production problems after migration. It is best to always stick with backward compatible processes - like extending a process definition rather than removing nodes. That's not always possible though, and in some cases there is a need to remove certain nodes from the process definition. In that situation, migration needs to be instructed how to map nodes that were removed in the new definition in case an active process instance is currently in such a node.

      Complete ProcessInstanceMigrationService can be found here.
      KIE Server version of it can be found here.

      With this, I'd like to emphasize that administrators of jBPM should be well equipped with enough tools for the most common operations they might face. Obviously that won't cover all possible cases, so we are more than interested in users' feedback on what else might be useful as an admin function. So share it!

      Pluggable container locator and policy support in KIE Server

      In a previous article, container locator support - commonly known as aliases - was introduced. At that time it used, by default, the latest available version of the project configured with the same alias. This idea was really well received and has since been further enhanced.

      Pluggable container locator


      First of all, the latest available container is not always the way to go. There might be a need for time bound container selection for a given alias, for example:

      • there are two containers for given alias
      • even though there is a new version already deployed it should not be used until predefined date

      So users can implement their own container locator and register it by bundling the implementation into a jar file placed on the KIE Server class path. As usual, the discovery mechanism is based on ServiceLoader, so the jar must include:
      • implementation of ContainerLocator interface
      • file named org.kie.server.services.api.ContainerLocator must be placed in META-INF/services directory
      • include fully qualified class name of the ContainerLocator implementation in META-INF/services/org.kie.server.services.api.ContainerLocator file
      Since there might be multiple implementations present on the class path, the container locator to be used needs to be given via a system property:
      • org.kie.server.container.locator - where the value should be the class name of the ContainerLocator implementation - simple name, not FQCN
      which will then be used instead of the default latest container locator.
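      As an illustration, a time bound locator could look roughly like the sketch below. The exact ContainerLocator method signature differs between KIE Server versions, so treat the locateContainer method and the container metadata it inspects as assumptions to be aligned with the org.kie.server.services.api.ContainerLocator interface you compile against.

import java.util.List;

import org.kie.server.services.api.ContainerLocator;
import org.kie.server.services.api.KieContainerInstance;

public class TimeBoundContainerLocator implements ContainerLocator {

    // placeholder switch-over date (epoch milliseconds) - before it the old container is used
    private static final long SWITCH_OVER_MILLIS = 1735689600000L;

    @Override
    public String locateContainer(String alias, List<? extends KieContainerInstance> containers) {
        if (containers == null || containers.isEmpty()) {
            return null;
        }
        if (System.currentTimeMillis() < SWITCH_OVER_MILLIS) {
            // before the switch-over date keep using the first container registered for the alias
            return containers.get(0).getContainerId();
        }
        // after the switch-over date use the most recently registered one
        return containers.get(containers.size() - 1).getContainerId();
    }
}

      Packaged with a META-INF/services/org.kie.server.services.api.ContainerLocator file listing this class, it would then be selected with -Dorg.kie.server.container.locator=TimeBoundContainerLocator.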

      So far so good, but what should happen with containers that are now idle or should not be used anymore? Since the container locator makes sure the selected (by default the latest) container is used in most cases, there might be containers that no longer need to be on the runtime. This is especially important in environments where new versions of containers are frequently deployed, which might lead to increased memory use. Thus efficient cleanup of unused containers is a must.

      Pluggable policy support

      For this, policy support was added - but not only for this, as policies are a general purpose tool within KIE Server. So what is a policy?

      A policy is a set of rules that is applied by KIE Server periodically. Each policy can be scheduled to be applied at a different interval. Policies are discovered when KIE Server starts and are registered, but they are not started by default.
      The reason for this is that the discovery mechanism (ServiceLoader) is based on class path scanning and is thus always performed, regardless of whether the policies should be used or not. So there is another step required to activate a policy.

      Policy activation is done via a system property when booting KIE Server:
      • org.kie.server.policy.activate - where value is a comma separated list of policies (their names) to be activated
      When the policy manager activates a given policy it will respect its life cycle:
      • it will invoke the start method of the policy
      • it will retrieve the interval from the policy (invoke the getInterval method)
      • it will schedule periodic execution of that policy based on the given interval - if the interval is less than 1, the policy is ignored
      NOTE: scheduling is done based on interval for both first execution and then repeatable executions - meaning first execution will take place after interval. Interval must be given in milliseconds.

      Similarly, when KIE Server stops it will call the stop method of every activated policy to properly shut it down.
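      A custom policy could then look roughly like the skeleton below. The interface and callback names here are assumptions based on the life cycle described above (start, getInterval, periodic apply, stop) - check them, and the imports, against the KIE Server version you build against.

import java.util.concurrent.TimeUnit;

import org.kie.server.services.api.KieServerRegistry;
import org.kie.server.services.api.Policy;

public class CleanupIdleContainersPolicy implements Policy {

    @Override
    public String getName() {
        // the name used with -Dorg.kie.server.policy.activate
        return "CleanupIdleContainersPolicy";
    }

    @Override
    public void start() {
        // initialize any resources the policy needs
    }

    @Override
    public long getInterval() {
        // first and repeated executions happen after this interval, in milliseconds
        return TimeUnit.DAYS.toMillis(1);
    }

    @Override
    public void apply(KieServerRegistry registry) {
        // inspect the registered containers and dispose the ones that should no longer run
    }

    @Override
    public void stop() {
        // release resources when KIE Server shuts down
    }
}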

      Policy support can be used for various use cases; one that comes out of the box complements the container locator (with its default latest-only behavior). There is a policy available in KIE Server that will undeploy containers other than the latest. This policy is applied once a day by default, but that can be reconfigured via system properties. KeepLatestContainerOnlyPolicy will attempt to dispose containers that have lower versions, though the attempt might fail. The reasons for failure may vary, but the most common one is active process instances on the container being disposed. In that case the container is left started, and the next day (or after another configured period of time) the attempt is retried.

      NOTE: KeepLatestContainerOnlyPolicy is aware of the controller, so it will notify the controller that the policy was applied and will stop the container in the controller as well - but only stop it, not remove it. As with any policy, this one must be activated via the system property as well.

      This opens the door to a tremendous number of policy implementations, starting with cleanup, through blue-green deployments, and finishing at reconfiguring runtimes - all performed periodically and automatically by KIE Server itself.

      As always, comments and further ideas are more than welcome.

      KIE Server router - even more flexibility

      Yet another article in the KIE Server series, this time tackling the next steps once you are already familiar with KIE Server and its capabilities.

      KIE Server promotes an architecture where there are many KIE Server instances responsible for running individual projects (kjars). These in turn might be completely independent domains, or the other way around - related to each other but separated onto different runtimes to avoid negative impact on each other.


      At this point, the client application needs to be aware of all the servers to properly interact with them. In detail, the client application needs to know:
      • location (url) of HR KIE Server
      • location (url) of IT KIE Server 1 and IT KIE Server 2
      • containers deployed to HR KIE Server
      • containers deployed to IT KIE Server (just one of them as they are considered to be homogeneous)
      While knowing about available containers is not so difficult - it can be retrieved from a running server - knowing about all the locations is trickier, especially in dynamic (cloud) environments where servers can come and go based on various conditions.

      To deal with these problems, KIE Server introduces a new component - the KIE Server Router. The router is responsible for bridging all KIE Servers grouped under the same router to provide a unified view of all servers. The unified view consists of:
      • Find the right server to deal with requests
      • Aggregate responses from different servers
      • Provide efficient load balancing
      • Deal with changing environment - like added/removed server instances

      Then the only thing the client needs to know is the location of the router. The router exposes most of the capabilities of KIE Server over HTTP. It comes with two main responsibilities:
      • proxy to the actual KIE Server instance based on contextual information - container id or alias
      • aggregator of data - to collect information from all distinct server instances in single client request


      There are two types of requests KIE Server Router supports from the client perspective:
      • Modification requests - the POST, PUT and DELETE HTTP methods are all considered as such. The main requirement for them to be properly proxied is that the URL includes the container id (or alias).
      • Retrieval requests - the GET HTTP method. When such requests do include a container id they are handled the same way as modification requests.
      There is an additional type of request - administration requests - that KIE Server Router supports, and these exist strictly to allow the router to function properly within a changing environment:
      • register new servers and containers when a server or container starts on any of the KIE Server instances
      • unregister existing servers and containers when a server or container stops on any KIE Server instance
      • list the available configuration of the router - what servers and containers it is aware of
      The router itself keeps very limited information; the most important thing is being able to route to the correct server instance based on the container id. So the assumption is that there will be only one set of servers hosting a given container id/alias. This does not mean a single server though - there can be as many servers as needed, added and removed dynamically, and the proxy will load balance across all known servers for a given container.

      Other than the available containers and servers, KIE Server Router does not keep any information. This might not cover all possible scenarios, but it does cover quite a few of them.


      KIE Server Router comes in two pieces:
      • a proxy that acts like a server
      • a client that is included in KIE Server to integrate with the proxy
      The router client hooks into the KIE Server life cycle and sends notifications to the KIE Server Router when the configuration changes:
      • when a container is started (successfully) it registers it in the router
      • when a container is stopped (successfully) it unregisters it from the router
      • when the entire server instance is stopped, it unregisters all containers (that are in started state) from the router

      The KIE Server router client is packaged in KIE Server itself but is deactivated by default. It can be activated by setting the router URL via the system property:
      org.kie.server.router
      which can contain one or more valid HTTP URLs pointing to the router(s) this server should be registered with.

      KIE Server Router exposes an API that is completely compatible with the KIE Server Client interface, so you can use the Java client to talk to the router as you would when talking to any KIE Server instance (see the client sketch after this list). It does have some limitations though:
      • the router cannot be used to deploy new containers - this is because it would not know the given container id yet and thus couldn't decide which server it should be deployed to
      • the router cannot deal with modification requests to KIE Server endpoints that are not based on container id
        • Jobs
        • Documents
      • the router will return a hard coded response when requesting KIE Server info
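      Since the router speaks the same API, pointing the standard Java client at the router URL is enough. Below is a minimal sketch - the router URL and credentials are placeholders.

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.api.model.instance.ProcessInstance;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.QueryServicesClient;

public class RouterClientExample {

    public static void main(String[] args) {
        // point the regular KIE Server client at the router instead of an individual server
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:9000", "user", "password");
        config.setMarshallingFormat(MarshallingFormat.JSON);

        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        // retrieval request - the router aggregates results from all servers it knows about
        QueryServicesClient queryClient = client.getServicesClient(QueryServicesClient.class);
        for (ProcessInstance instance : queryClient.findProcessInstances(0, 10)) {
            System.out.println(instance.getId() + " -> " + instance.getProcessId());
        }
    }
}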
      See basic KIE Server Router capabilities in action in the following screencast:


      Response aggregators

      Retrieval requests are responsible for collecting data from various sources, but they must return all the data aggregated into a single, well structured response. This is where response aggregators come into the picture. There are dedicated response aggregators per data format:
      • JSON
      • JAXB
      • Xstream 
      The XML based aggregators (both JAXB and XStream) use Java SE XML parsers with some hints on which elements are the subject of aggregation, while the JSON one uses the org.json library (the smallest one) to aggregate JSON responses.

      All aggregated responses are compatible with data model returned by KIE Server and thus can be consumed by KIE Server Client without any issues.

      Aggregators support both sorting and pagination of the aggregated results. The aggregation, sorting and paging are done on the router side, though the initial sorting is done on the actual KIE Server instances as well to make sure it is properly respected on the source data.

      Paging, on the other hand, is a bit more tricky, as the router needs to ask the KIE Servers to always return everything from page 0 up to the requested page, so that all KIE Servers are properly taken into consideration before the requested page is returned.

      See paging and sorting in action in following screencast.



      That concludes the quick tour of the KIE Server Router, which should provide more flexibility when dealing with more advanced KIE Server environments.

      Comments, questions, ideas as usual - welcome

      Traditional vs Modern BPM - why should I care?

      Traditional BPM vs. modern ... what is this about?

      In this article I'd like to touch a bit on BPM in general and how it is evolving, mainly to ask the question - should I still care about BPM? Hmmm ... let's look into it then...

      Putting a proper business context on top of a generic process engine (or case management) is a complete game changer. Users don't see it as a huge "server" installation but instead see what they usually work with - their domain. By domain I mean naming conventions, information entities etc. instead of generic BPM terms such as:

      • process
      • process instance
      • process variables
      Instead of trying to make business people (from different domains, sometimes completely unrelated) unify on the terminology and become process engine/BPM oriented, modern BPM solutions should aim at making themselves part of the ecosystem rather than being an outsider. This makes a lot of sense when you look at it from the end user point of view. In many cases end users would like:
      • to feel comfortable in the system that supports my daily work
      • to understand the terminology used in the system - non IT language preferred 
      • to be able to extend its features and capabilities easily 
      • possible to federate with other systems - aggregated UI approach 
      These are just a few points, and I am sure many business users could come up with quite a long list in this area.

      Nevertheless, what is the point here? The main one is to not look at BPM as a traditional setup anymore, but to use it to innovate your domain by utilizing the knowledge that is present in each and every coworker in your company. The asset that modern BPM should promote is the collectively gathered knowledge of your domain and the way you work to make profit.

      I used the term 'traditional BPM'; what I mean by that is a rather big and complex installation of a BPM platform of any kind - centralized servers that are meant to do all the heavy lifting for the entire company without much effort ... at least that's what the slides say, right? :) Traditional BPM that aimed at solving all problems did not really pay off as expected, mainly due to the complexity it brought. It had a few major drawbacks identified over the years:
      • difficult to learn - usually big product suites
      • difficult to maintain - complexity grows with every deployment on the platform, and upgrades or maintenance activities become harder and harder
      • difficult to scale - usually because of the chosen centralized architecture (which, by the way, was meant to solve the maintainability problems)
      • generated components had their limitations - the first thing end users ran into was that the UI components were either tightly coupled to the product or not powerful enough
      Traditional BPM

      Due to those drawbacks, BPM initiatives in organizations were usually expensive and time consuming activities. Moreover, many of them failed to deliver because of the gap between expectations/promises and the actual capabilities of the product/solution.
      BPM projects often made a promise to bridge the gap between IT and business, but in the end (one way or another) IT took control and drifted away from the business, making the delivered solution not so business focused anymore. Some of this was caused by the limitations mentioned above, but some was because the chosen architecture was simply not suitable for the needs of the business domain.

      I keep saying "business domain" or "business context" and I really mean it - this is the most important aspect of the work when dealing with business (ops ... I did it again) knowledge. The key to the success is the knowledge to be used in IT solution and not vice versa (IT solution altering the way business is done to fit the product/technology).

      So if the traditional BPM does not meet its promises, should I still care about BPM at all?

      The short answer is yes, though look for alternative ways of using BPM - what I call modern BPM. The slightly longer answer is: traditional BPM did have success stories as well, so it provides solid ground for the next generation of BPM solutions. It did give us proper and stable specifications:
      • BPMN2
      • DMN
      • CMMN
      to name just a few. So conceptually it is still valid and useful; what matters more is how it is actually realized. Modern BPM is about scoping your solutions to the domain - in other words, proper partitioning of your domain will help you solve or overcome some of the issues exposed by traditional BPM.

      The first and foremost principle of modern BPM is

      avoid centralization 

      Don't attempt to build a huge "farm like" installation that should deal with all the processes within your organization (regardless of its size) - sooner or later it will grow ... Keep it small, to the minimum needed to cover a given domain or part of a domain. If your domain is Human Resources, think about partitioning it into smaller pieces:
      • employee catalogue 
      • payroll 
      • absence 
      • contract management
      • benefits and development plan
      • etc
      with that separation you can easily
      • evolve in isolation
      • maintain separately - upgrades, deployments etc
      • scale individual parts
      • federate systems to build portal like entry points for end users
      • use different technology stack/products for individual parts

      always put things in business context

      Keep in mind why things are done the way they are - that's because of the business context. If you understand your domain, make sure it is captured as knowledge and then used within the IT solutions - this is where BPM practices come in handy. That's the whole point of having BPM in your toolbox - once the business knowledge (business processes, business rules, information entities) is collected, it can be directly used for execution.

      make tailored UI

      Make use of any tools/frameworks/etc. you have available to build a state of the art, tailored UI for the domain. Don't expect generic platforms to generate a complete and comprehensive application for you - the reason is that the platform most likely does not know the domain, so what it provides might be limited. Don't throw that idea away directly though; it might be a good start to extend and fit to your needs. It all depends on your domain and business context ... I know, again :)

      Modern BPM


      To summarize, modern BPM is about using the tool in a more modern way, but it is still the same tool - a process/rule engine. The tool needs to be capable of being used that way, meaning it should be suitable for lightweight deployment architectures that can easily scale and evolve, yet still provide value to the business in a matter of days rather than months or years of IT projects.

      Next article will give an example of such system to show what can be done in less than a week of time... stay tuned!

      Order IT hardware - jBPM 7 case application

      In the previous article I talked about Traditional vs Modern BPM, and now I'd like to show what that means - modern BPM in action. For this I used an upcoming feature of jBPM 7 that provides case management capabilities, already introduced in the Case Management series that can be found here.

      So this is another iteration of the Order IT hardware case that allows employees to place requests for new IT hardware. There are three roles involved in this case:

      • owner - the employee who placed the order
      • manager - direct manager of the employee - the owner
      • supplier - available suppliers in the system
      It's quite simple case definition that looks like this:


      As presented above, this process does not look much like a regular process - it's a case definition, so it's a completely dynamic (so called ad hoc) process. This means new activities can be added to it at any time, or different fragments can be triggered as many times as needed.

      Worth noticing here are the Milestone nodes:
      • Order placed
      • Order shipped
      • Delivered to customer
      • Hardware spec ready
      • Manager decision
      Milestones are completed based on conditions; in this case all conditions evaluate the case instance data - the case file. So as soon as the data is provided, the milestone is achieved. Milestones can be triggered manually by a signal or auto started when the case instance starts.

      Here you can watch this application in action


      and now we dive into details of how this application was composed...

      So application is built with following components:

      • WildFly Swarm as runtime environment
      • KIE Server as backend 
      • PatternFly with AngularJS as front end
      This application is fully featured and runnable, but it should be seen as a showcase/PoC that aims at showing the intention behind modern BPM and case applications: no more centralized deployments to serve everyone, but instead tailored apps that do one thing and do it right.


      So once you log on you will see the home screen - the purpose of this system - ordering new hardware.
      Here you can see all available suppliers that can deliver IT hardware:

      • Apple
      • Lenovo
      • Dell
      • Others (for anything that does not match above)
      Here the suppliers are considered groups in the BPM world - meaning tasks assigned to the selected supplier will match the selection. The selected supplier will then be assigned to the "Prepare hardware spec" task in the case definition.

      Once you fill in the form and place an order, you'll be given an order receipt.



      At any given time you can take a look at orders

      • My orders - those that you (as logged in user) placed
      • All orders - lists all orders currently opened
      From the list you can go into details of particular order to see its status and more


      The order details page is built from three parts:

      • the left hand side shows the progress of your order - matching all milestones in the case with their status - currently no progress at all :(
      • the central part is for case details
        • the hardware specification document that the supplier is expected to deliver
        • a comments section to discuss and comment on the order
        • My tasks - any tasks that are assigned to you (as logged in user) in the scope of this order
      • the right hand side shows the people and groups involved in your order - giving you a quick link in case you'd like to get in touch

      So the first thing to be done here is up to the selected supplier - (s)he should provide a document with the hardware specification for the placed order.

      Here is a list of tasks assigned to the supplier (as logged in user) that (s)he can take and directly work on by providing the hardware specification document.


      Once the document is uploaded, the owner of the order can take a look at it via the order details page.

      Now you can observe some progress on the order - the hardware spec was delivered and is available for download in the central section. On the right side you can see that the Hardware specification milestone is checked (and green), so it was completed successfully.

      Next it's up to the manager to look at the order and approve or reject it.



      In case the manager decides to reject it, the decision and reason will be available on the order details page.



      What is important to note here is that, since the manager rejected the order, there is a new option available in the order to request approval again. This is only available when the order was rejected, and it can be used to change the manager's decision. It will create a dynamic task for the manager (as it does not exist in the case definition) and thus allow the manager to change his/her decision.

      The entire progress of the order is always available on the order details page, where all order data can be easily found.

      Once the manager approves the order, it is again handed over to the supplier to place the physical order for shipment.
      Then on the order page you'll find additional actions to mark when the order was shipped, and later when it was delivered, which is usually done by the order owner.






      Last but not least is the customer satisfaction survey that is assigned to the owner for evaluation.



      The owner of the order can then provide his/her feedback via the survey, and that feedback will be kept in the order data (case file).


      The order is not closed until it's explicitly closed from the order details page ... usually by the owner when (s)he feels it's completed; otherwise more activities can still be added to the order.

      Conclusion

      The idea of this article is to show how you can leverage modern BPM to quickly build business systems that bring value and still take advantage of BPM, just in a slightly different way than the traditional one. This application was built in less than 4 days ... so any reasonably sized application to demo capabilities should be doable in less than a week. That's the true power of modern BPM!


      Feel like trying it yourself? Nothing easier - just do it. Follow the instructions and off you go!!!

      Share your feedback


      Distribute tasks wisely ... pluggable task assignments jBPM7

      User interaction in business processes is one of the most important aspects to make sure that the job is done. But not only that, it should make sure that the job is done:

      • on time
      • by proper actors
      • in least time possible
      • and more...
      User tasks in business processes can be assigned either to:
      • user(s) - individuals that are known at the time of task creation - one or more
      • group(s) - groups/roles that are known at the time of task creation - one or more groups
      • users or groups referenced as process variables
      With this, users are already equipped with quite a few choices that allow them to manage user tasks efficiently. So let's review a simple scenario:

      Here is the simplest process there can be - a single user task.
      Such a task can be assigned (at design time - when the process is created) to:
      • individuals via ActorId property - it supports comma separated list to specify multiple actors
      • groups via GroupId property - it supports comma separated list to specify multiple groups


      ... use single actor

      So if the task is assigned to single actor then when task is created it:
      • will be assigned directly to that actor - actual owner will be that actor
      • will be moved to Reserved state
      • no one else will be able to work on that task anymore - as it is in reserved state
      So that seems like a nice approach, but in reality it constrains users too much, because to change the actor you have to change the process definition. And in many cases there is a need for more users to be able to deal with certain tasks instead of just a single person.

      ... use multiple actors

      To solve that, you can use the approach with multiple actors (specifying a set of users as a comma separated list is supported). So what will happen then?
      • task will not be assigned to any individual as actual owner as there is no way to tell which one should be selected
      • task will be available to all actors defined in process definition on that task
      • task will be moved to Ready state
      • user to be able to work on task will have to explicitly claim the task
      So a bit of an improvement, but it still relies heavily on individuals and a manual claim process that can sometimes lead to inefficiency due to delays.

      ... use groups

      Naturally, to get around the problem of individuals being assigned (as they come and go), a better option would be to assign tasks to groups. When a task is assigned to a group it:
      • will not be assigned to any individuals as by nature group contains many members
      • will be available to all actors that belong to the defined group(s) - this is resolved at query time, since the task is assigned to the group, so changes to the group do not have to be synced with tasks
      • will be moved to Ready state
      • user to be able to work on task will have to explicitly claim the task
      So this improves the situation a bit but still has some issues ... the manual claim and potential delays in picking tasks from the group "queue".

      ... use pluggable task assignment 

      For this exact purpose, jBPM 7 comes with pluggable task assignment support to let you (or, to be precise, the system) distribute tasks according to various criteria. The criteria here are what makes the difference, as different business domains will have different ways of assigning tasks. Even different departments within the same organization will differ in that regard.

      So what is task assignment here? In general, it is the logic that will be used to automatically find the most suitable candidate to take the task. To carry on with the example, there is a process with a task assigned to a single group - HR.
      The task assignment strategy will then be invoked when the task is created, and the strategy can find the best actual owner for it. If it does, such a task:
      • will be assigned to selected actor
      • will be moved to Reserved state
      • no one else will be able to work on this task any more 
      But if the strategy is not able to find any suitable candidate (which should be a rather rare case, but can still happen), the task falls back to the default behavior as described above.

      An assignment strategy can be based on almost anything that is valuable to the business to make a fact based and efficient decision. That means strategies can be based on:
      • potential owners (as in this example)
      • task data (input variables)
      • task properties (name, description, project, etc)
      • time when task was created
      • external data not related to task itself

      So that gives users all the options to build their own strategies based on their specific needs. But before going into the implementation (next article) let's look at...

      ... what comes out of the box

      jBPM 7 comes with two assignment strategies out of the box
      • Potential owner busyness strategy - default
      • Business rules strategy
      Potential owner busyness strategy
      This strategy simply makes sure that the least loaded actor from the potential owner list is selected. The strategy works on both types of potential owners - users and groups - but to be able to effectively find the best match it needs to resolve groups to users. Resolution is done by the UserInfo configured in the environment - please make sure you have one properly configured, otherwise the strategy will not work in the most efficient way.

      Name of the strategy to be used to activate:
      PotentialOwnerBusyness




      Business rules strategy
      This strategy promotes the use of business rules as a way of selecting actual owners. The strategy does not come with any predefined rules, but instead expects to be given the KJAR coordinates of the project to be used to perform the assignment. The most important factor here is that any rule can be used, and it supports dynamic updates of the rules as well by making use of KIE Scanner, which can incrementally update the knowledge base when a new version of the KJAR is discovered.

      Name of the strategy to be used to activate:
      BusinessRule

      Configuration parameters supported:
      • org.jbpm.task.assignment.rules.releaseId
        • required parameter that points the GAV of the KJAR
      • org.jbpm.task.assignment.rules.scan
        • optional - poll interval for the scanner in case it should be enabled - same as KIE Scanner expects it - in milliseconds
      • org.jbpm.task.assignment.rules.query
        • optional - Drools query to be used to retrieve results - if not given, all Assignment objects are taken from working memory and the first one is selected if the result is not empty


      ... not only on task creation

      Task assignment is invoked not only on task creation (though that will be the most common case); it also gets involved when:
      • task is released - here the actual owner who releases the task is excluded from the assignment
      • task nomination
      • task reassignment (deadlines)
      • task forwarding
      With that, it should provide quite capable self assignment behavior when the strategy is tailored to the given needs.

      ... how to use it

      Task assignment is disabled by default and can be easily enabled by specifying system property:
      -Dorg.jbpm.task.assignment.enabled=true

      then selecting strategy is done by another system property:
      -Dorg.jbpm.task.assignment.strategy=NAME OF THE STRATEGY

      If org.jbpm.task.assignment.strategy is not given, the PotentialOwnerBusyness strategy is used by default.

      To be able to properly resolve group members, users need to select a user info implementation via a system property:

      -Dorg.jbpm.ht.userinfo=db

      This one will select the database as the source of group members and thus has to be configured additionally. KIE Server comes with an example configuration file in
      kie-server.war/WEB-INF/classes/jbpm.user.info.properties

      where the db queries specifying how to find users of a given group should be defined.

      Optionally, you can create a userinfo.properties file in the same directory and specify the group-to-users mapping in the following format:

      #groups setup
      HR=hr@domain.com:en-UK:HR:[maciek,engUser,john]
      PM=pm@domain.com:en-UK:PM:[maciek]

      This is only for test purposes. For a real environment use either the database or LDAP based UserInfo implementation.

      That concludes the introduction of task assignment strategies, which are completely pluggable. The next article will illustrate how to implement a custom strategy.