
jBPM 5.3 brings LDAP into the picture

The jBPM engine itself does not require knowledge about users and groups, but to build a more complete platform on top of it, users and their memberships will sooner or later be needed. Prior to version 5.3, jBPM relied on a basic (demo-only) setup based on property files. With 5.3 you can make use of an integration that allows you to employ an existing LDAP server.

So, let's start digging into it :)

User and group information is relevant to two components:
  • jbpm console
  • human task server
The jbpm console needs to know about users and their roles to properly authenticate them and grant them the correct access in the console.
The human task server has a bit more to do with users and groups. First of all, it needs them when assigning tasks to entities (either users or groups). Next, if the notification mechanism is configured, it needs to fetch additional information about the entity (such as an email address).

A bit of a heads-up is always welcome, but how can this be configured and used? First, we need an LDAP server that can be used as a user repository. If you already have one, this step can be omitted.

Install and configure LDAP server


Install and configure the LDAP server of your choice. I personally use OpenLDAP, which can be downloaded and used freely for evaluation purposes. Regardless of your choice, the best guide on installation and configuration will most likely be found on the server's home page.

Note: make sure that the inetOrgPerson schema is included when configuring your LDAP server, otherwise the import of the example LDIF will fail.

Once the server is up and running, it's time to load it with the information that will be used later on by jBPM. The following is a sample setup of users and groups that matches the one used in previous versions of jBPM5 (the example LDIF file expects that a domain is already configured: dc=jbpm,dc=org):

Sample LDIF file
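The original sample LDIF is linked from the post; as a rough sketch of its shape (the user and group names here are illustrative, not the full demo setup, and the entries assume the dc=jbpm,dc=org domain already exists), it defines People and Roles subtrees like:

```ldif
# illustrative sketch only - see the linked sample LDIF for the real demo data
dn: ou=People,dc=jbpm,dc=org
objectClass: organizationalUnit
ou: People

dn: ou=Roles,dc=jbpm,dc=org
objectClass: organizationalUnit
ou: Roles

# a user entry (inetOrgPerson, hence the schema requirement above)
dn: uid=john,ou=People,dc=jbpm,dc=org
objectClass: inetOrgPerson
uid: john
cn: John Doe
sn: Doe
mail: john@jbpm.org
userPassword: john

# a role/group entry referencing the user by DN
dn: cn=analyst,ou=Roles,dc=jbpm,dc=org
objectClass: groupOfNames
cn: analyst
member: uid=john,ou=People,dc=jbpm,dc=org
```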

The rest of the post assumes that the LDAP server is installed on the same machine as the application server that hosts jbpm (but it is not limited to that).

Configure JBoss AS7 security domain to use LDAP


The next step is to configure the application server that hosts the jbpm console to authenticate users against LDAP instead of the default property-file based security domain. In fact, no changes are needed to the jbpm console itself, only to the JBoss AS7 configuration.
To use LDAP, the jbpm-console security domain inside the standalone.xml file should be replaced with:
 <security-domain name="jbpm-console">
      <authentication>
         <login-module code="org.jboss.security.auth.spi.LdapExtLoginModule" flag="required">
             <module-option name="baseCtxDN" value="ou=People,dc=jbpm,dc=org"/>
             <module-option name="baseFilter" value="(uid={0})"/>
             <module-option name="rolesCtxDN" value="ou=Roles,dc=jbpm,dc=org"/>
             <module-option name="roleFilter" value="(member=uid={0},ou=People,dc=jbpm,dc=org)"/>
             <module-option name="roleAttributeID" value="cn"/>
             <module-option name="allowEmptyPasswords" value="true"/>
         </login-module>
     </authentication>
 </security-domain>
This is pure JBoss configuration, so in case a more advanced setup is required, please visit the JBoss AS7 documentation.

Note that as of jBPM 5.3 the jbpm console is capable of using any security domain that JBoss AS supports simply by configuring it properly on the application server, so it is not limited to LDAP.

Configure Human task server to use LDAP


Finally it is time to look into the details of the human task server configuration to make it use LDAP as a user repository. Those who follow jBPM development are already familiar with the UserGroupCallback interface, which is responsible for providing user and group/role information to the task server; naturally, the LDAP integration is done through an implementation of that interface.
Similar to configuring the application server, the LDAP callback needs to be configured (with a property file, for instance). Here is a sample file that corresponds to the configuration used throughout the post:

ldap.user.ctx=ou\=People,dc\=jbpm,dc\=org
ldap.role.ctx=ou\=Roles,dc\=jbpm,dc\=org
ldap.user.roles.ctx=ou\=Roles,dc\=jbpm,dc\=org
ldap.user.filter=(uid\={0})
ldap.role.filter=(cn\={0})
ldap.user.roles.filter=(member\={0})

As of 5.3, the human task server is by default deployed as a web application on JBoss AS7. With this, users can simply adjust the configuration of the human task server by editing its web.xml file, and this is how the LDAP callback is registered:

<init-param>
     <param-name>user.group.callback.class</param-name>
     <param-value>org.jbpm.task.service.LDAPUserGroupCallbackImpl</param-value>
</init-param>

Next, put jbpm.usergroup.callback.properties on the root of the classpath inside the jbpm-human-task.war web application and your LDAP callback will be ready to rock!

In addition, when using deadlines on your tasks together with notifications, there is one more step to configure so that user information (for instance the email address) can be retrieved from the LDAP server. The UserInfo interface is dedicated to providing this sort of information to the deadline handler, and thus its implementation needs to be registered as well.
In the same way the user group callback was registered, this can be done via the web.xml of the human task web application:

<init-param>
     <param-name>user.info.class</param-name>
     <param-value>org.jbpm.task.service.LDAPUserInfoImpl</param-value>
</init-param>

It needs to be configured as well, which can be done via a property file named jbpm.user.info.properties placed on the root of the classpath.
As it shares most of its properties with the callback configuration, in many cases a single file containing all the required values can be used; both implementations can then be told where to find this file with system properties:

-Djbpm.user.info.properties=classpath-location-and-file-name
-Djbpm.usergroup.callback.properties=classpath-location-and-file-name

With all that in place, jBPM will now utilize your LDAP server for user and group information whenever it needs it.

P.S.
This post is more of an introduction to the LDAP integration than a complete and comprehensive guide, as it only touches at a high level on all the components involved. More detailed information about configuring the particular pieces can be found in the jBPM documentation for the 5.3 release.

Any comments are more than welcome.

Self-manageable user tasks - notification and reassignment

As a continuation of the first post, let's try to make use of the LDAP configuration to make actors aware of the tasks awaiting their attention.
The human task service is capable of reacting to certain events, such as a task not being started or not being completed in time. Currently there are two options a process designer can choose from:
  • remind the user about the task
  • reassign the task to another actor/group
Depending on the needs, either one of them or both can be configured on a user task activity. Moreover, designers are not limited to single instances of those events; for instance, a user task can be modelled as follows:
  1. as soon as the task is created, send a notification to the actor assigned to it
  2. if there is no action within a day, send a reminder to the actor
  3. if there is no action within two days, reassign the task to another actor
  4. if the task is not completed within a week, send a notification to a manager
This is illustrated in the following screencast, which starts with designing a process with a user task, configuring deadlines (reassignment and notifications), then building and running the process.

1. First, let's design the process and define the deadlines


2. Now it is time to build the package and execute the process


The task service supports four types of events:
  • reassign if not started
  • reassign if not completed
  • notify if not started
  • notify if not completed
To give some flexibility, expressions that reference process and task variables are supported. For instance, when modelling a notification, the users and/or groups can point to process variables to get the list of users/groups that should be notified. In addition, the notification subject and body can reference both process and task variables, and a few extra variables are provided that are useful when referring to a task:
  • processInstanceId
  • processSessionId
  • workItemId
  • taskId
  • owners
 and all task variables are available as a Map in
  • doc
Process variables are accessed with #{variable} and task variables with ${variable}; a simple notification could look like this:

Hello,

A task with id ${taskId} was assigned to you. You can access it in your personal <a href="http://localhost:8080/jbpm-console/app.html#errai_ToolSet_Tasks;Group_Tasks.3">inbox</a>.

Regards
jBPM

Although HTML notifications are supported, it is recommended to make use of process variables and some service that provides email templates, instead of putting them inline within the process definition (the bpmn2 file). Both the subject and body of the notification can be declared as process variables.

But how to configure it?

Note: This guide assumes that the human task service is deployed as a web application; if that is not the case, please refer to the online documentation.

There are two elements that need to be provided to make the human task service capable of performing deadline actions:
  1. configure the user info component that delivers the information required by the notification mechanism
  2. configure the email service
And of course declare the deadline requirements on the user task in your process.

The previous post showed how to configure the LDAP user info component to utilize an external service as a user information provider (quick recap: it simply requires putting the right class name in the web.xml init-param section - user.info.class - together with the LDAP configuration property file). As LDAP is the common choice in such cases, there is also a way to add custom implementations, so you are not limited to it.

Configuring the email service is done by providing a drools.email.conf property file that contains the SMTP configuration and placing it inside the META-INF directory of jbpm-human-task-war/WEB-INF/classes.

from = sender-email-address (required)
replyTo = reply-to-email-address (optional)
host = smtp-host-name (required)
port = smtp-port-number (required)
defaultLanguage = en-UK
user=username-if-authentication-enabled
password=password-if-authentication-enabled

NOTE: due to a bug, the file name may need to be prefixed with an additional drools. to be found properly, so the file name would be drools.drools.email.conf; this will be fixed in 5.4.

With this, many use cases can be covered, as shown in the example, but most likely not all. So if you find yourself in a scenario you think might be useful to others, please let us know about it :)

JUDCon2012 - Boston

JUDCon 2012 in Boston was a great place to meet with fellow developers and discuss the future of jBPM. I had the pleasure of giving a talk about an experimental project that I have been working on for quite some time, though unfortunately without enough time spent on it. Anyway, I gave an introduction to the project that I like to call jbpm enterprise.



The goal of this project is to provide a comprehensive BPM platform on top of the jBPM and Drools projects, to leverage the most of them in an enterprise (Java EE) environment. So what does it do?
  • First of all, it bundles jbpm and drools (together with all dependencies) into a JBoss Module.
  • Next, it provides a very thin layer that abstracts the knowledge API and provides simplified interfaces to interact with the execution engine (an execution engine is a knowledge base with sessions, enclosed by a component with additional characteristics - more in the presentation)
  • it exposes some services via the OSGi service repository
  • it provides Maven archetypes for building your own components that utilize the platform over the OSGi service registry:
    • a bundle archetype dedicated to holding your process/rules/events logic
    • a web application archetype that makes use of the platform (starting and signalling processes, etc.)

More details can be found in the presentation from the JUDCon2012 conference.

If someone would like to give it a test drive, the downloadable artefacts and a short guide can be found here.

Have fun and as usual comments are welcome.

In this case your feedback is even more important, to see if this is something that the community expects.

Service Task with web service implementation

Invocation of web services as part of a business process is common, and most likely because of that the default implementation of the Service Task in the BPMN2 specification is the web service. Recently (in 5.4) jBPM5 gained support for this activity.

jBPM5 web service support is based on the Apache CXF dynamic client. It provides a dedicated Service Task handler (that implements the WorkItemHandler interface):

org.jbpm.process.workitem.bpmn2.ServiceTaskHandler

Worth noting is that this handler is capable of invoking both web service endpoints and simple Java based services, as with the previous service task handler (org.jbpm.bpmn2.handler.SendTaskHandler), based on the implementation attribute of the service task node:

web service implementation
 <bpmn2:serviceTask id="ServiceTask_1" name="Service Task" implementation="##WebService" operationRef="_2_ServiceOperation"> </bpmn2:serviceTask>

java implementation
<bpmn2:serviceTask id="_2" name="Hello" operationRef="_2_ServiceOperation" implementation="Other">
</bpmn2:serviceTask>

The ServiceTaskHandler can invoke web service operations in three modes:
  • synchronous (sends the request and waits for the response before continuing)
  • asynchronous (sends the request and uses a callback to get the response)
  • one way (sends the request and does not wait for any response)
This configuration is done at the service node level as a parameter (data input), so different service nodes can use different configurations while being handled by the same service task handler.
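As a sketch of what that data input could look like in the bpmn2 file - note the input name mode and the process variable wsMode are assumptions here, so double-check them against the handler source and the example project:

```xml
<!-- sketch only: assumes the handler reads a data input named "mode"
     (sync | async | oneway); verify the exact name for your version -->
<bpmn2:serviceTask id="ServiceTask_1" name="Get forecast"
                   implementation="##WebService"
                   operationRef="_2-2-4_ServiceOperation">
  <bpmn2:ioSpecification>
    <bpmn2:dataInput id="ServiceTask_1_modeInput" name="mode"/>
    <bpmn2:inputSet>
      <bpmn2:dataInputRefs>ServiceTask_1_modeInput</bpmn2:dataInputRefs>
    </bpmn2:inputSet>
    <bpmn2:outputSet/>
  </bpmn2:ioSpecification>
  <bpmn2:dataInputAssociation>
    <bpmn2:sourceRef>wsMode</bpmn2:sourceRef> <!-- a process variable -->
    <bpmn2:targetRef>ServiceTask_1_modeInput</bpmn2:targetRef>
  </bpmn2:dataInputAssociation>
</bpmn2:serviceTask>
```

Mapping the mode from a process variable, as above, is what lets the example ask at start time whether to run sync or async.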

Let's try to go through this implementation with an example. We are going to build a process that gets the weather forecast for given ZIP codes in the US. The process will look like this:

This process will:
  • ask for a couple of ZIP codes in the first human task (the task is assigned to john)
  • next, transform the result of the user task into a collection that will be used as input for a multi-instance service task
  • then, based on the input collection, create several instances of the service task to query the weather forecast service
  • once all service task instances are completed, log the result to the console
  • and create another human task to show the weather forecast for the selected ZIP codes (the task is assigned to john)
When the process instance is started, it will prompt the user to select in which mode the service task instances should be executed: async or sync. With this particular example, changing the mode from async to sync will not make a big difference, as the service we use is rather fast, but with services that take some time to respond the difference will be noticeable.

But how does it know it is a web service, and even more importantly, which web service it is? This is configured as part of the process definition using a few dedicated constructs:

1. First of all, we need to tell the engine where our WSDL is, so that it can be read and its operations invoked - this is done with a BPMN2 import:
 <import importType="http://schemas.xmlsoap.org/wsdl/" location="http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL" namespace="http://ws.cdyne.com/WeatherWS/"/>

2. Next, the message, interface and operations must be defined:
<itemDefinition id="_2-2-4_InMessageType" />
<message id="_2-2-4_InMessage" itemRef="_2-2-4_InMessageType" />

<interface id="_2-2-4_ServiceInterface" name="" implementationRef="Weather">
  <operation id="_2-2-4_ServiceOperation"
       implementationRef="GetCityWeatherByZIP" name="hello">
      <inMessageRef>_2-2-4_InMessage</inMessageRef>
  </operation>
</interface>

Important: make sure that the implementationRef for both the interface and the operation points to a valid service and operation in the WSDL.

3. Next, use the defined operation in your service task and set the implementation to web service (or don't specify that attribute at all, so the default will be used):
<serviceTask id="_2" name="Service Task" operationRef="_2-2-4_ServiceOperation" implementation="##WebService">
........
</serviceTask>

NOTE: Unfortunately the tooling does not support this yet, so the bpmn2 file needs to be edited by hand. Soon the tooling will provide this as well.

Yet another important thing here: if you plan to use request or response objects of the service as variables in your process, make sure that all of them implement the java.io.Serializable interface so they can be properly persisted. One way to do this (used in the example) is to provide additional configuration that tells JAXB to add Serializable while generating classes from the WSDL, and to generate the classes as part of the build:
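The JAXB side of this is standard: a bindings file passed to the WSDL-to-Java generation (xjc or the cxf-codegen plugin) can request Serializable on all generated classes. A minimal bindings file, with the uid value up to you, looks like:

```xml
<jaxb:bindings xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
               xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
               jaxb:version="2.0">
  <jaxb:globalBindings>
    <!-- adds "implements Serializable" plus a serialVersionUID
         to every class generated from the WSDL -->
    <xjc:serializable uid="1"/>
  </jaxb:globalBindings>
</jaxb:bindings>
```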

The complete source code can be found here. It comes with test cases that use this example, as well as a local service that can be used to illustrate the difference between the sync and async modes.

This example can be executed in the jbpm-console when built from master, as it already has this service task handler configured. Here is a short guide on how to do it:
1. clone jbpm master and build it with mvn clean install -DskipTests -Dfull (alternatively, download the latest build from the build server)
2. clone jbpm-examples and build the jbpm-ws-example project with mvn clean install
3. copy the result of the build from step 2 (jbpm-ws-sample-1.0-SNAPSHOT.jar) into jbpm-installer/dependencies
4. copy WeatherWSServiceProcess.bpmn2 into jbpm-installer/sample/evaluation/src/main/resources
5. copy all archives from jbpm-distribution into jbpm-installer/lib
6. use jbpm-installer to install jbpm into JBoss AS7 - ant install.demo.noeclipse
7. start the demo server - ant start.demo.noeclipse

Then go to the jbpm-console and run the process named WeatherWSServiceProcess.

Have fun!

Simulation in jBPM (draft)

Recently some work has started to provide simulation capabilities for jBPM. Simulation means different things to different people, so let me start with some context on what simulation means to me and what the current simulation component is actually capable of.

Simulation of business processes targets business analysts who design and optimize processes; it is not a developer tool. It brings analytics into the picture, so that while modelling the process, various scenarios can be evaluated to see which option is best based on current knowledge. In my eyes it is a way of learning the process, understanding its design better, and preparing for the consequences a particular process can introduce. For example, consider a goods return process that takes care of products that were bought but for some reason were returned to the store. There are several steps that need to be performed to analyze the reason for the return:
  • it is broken
  • it does not meet customer expectations
  • etc
Depending on the reason, various decisions can be taken, from rejecting the return, through sending the product for further analysis to verify it is broken, up to accepting it and returning the funds. The following diagram illustrates a sample of such a process.

We can imagine that this process is prepared for a heavy sales period, such as Christmas time. An analyst expects quite a few returns of duplicated/missed gifts and wonders how to prepare the company to deal with them efficiently. With simulation and just a little additional information (input data), provided as a sort of forecast of the expected load and available resources, (s)he can identify potential bottlenecks in the organization that would prevent it from working efficiently (gathering profit). To name a few of these inputs:
  • probability of taking a given path on the gateway
  • time spent on executing given activity
  • how many people are available to work on user tasks

Once such information is provided, a simulated run through the process is executed, and the results are gathered and presented to the analyst for inspection. A number of runs can be executed with various input data to exercise "what if" scenarios. Alternative sources of information can exist too; for instance, the analyst can make use of real-time data collected by business activity monitoring for processes already running on production systems.

With this short introduction we can move on to look at how this is realized by jbpm simulation.

First of all jbpm simulation is divided into two components:
  • path finder
  • simulation engine extension

The path finder component is responsible for determining all the alternative paths through the process, to illustrate how the process can be traversed. This is not only informational but also input for running a simulation. The following image shows a sample process with the alternative paths identified, one of which is visualized on the diagram.


The simulation engine extension is (as the name suggests) an extension to the jbpm engine that allows you to run simulations instead of normal process instance executions. Instead of relying on process data such as variables, it traverses the process based on the identified paths. That means the path finder component is responsible for providing input to the simulation engine extension. This at least is the main use case; it could also be used to alter the path flow when debugging a simulation.

The simulation engine extension provides the core of the simulation dedicated to processes, but does not run the simulation itself. For that, drools-simulator (some details about it can be found here and here) is employed, together with its fluent API that is based on paths and steps that can be positioned in time.

A typical use case would look like this:
  • model the process definition
  • determine all alternative paths
  • each alternative path becomes a path in the drools simulation fluent
  • define the steps for each path on a given time distance (there can be several steps configured for a drools simulator path, each of which in fact represents a simulation instance)
  • add a SimulateProcessPathCommand for each step
  • run the simulation
To see a running example of such a simulation, take a look at the test cases that are part of jbpm-simulation.

While executing a simulation, the simulation engine extension will generate events for every simulated activity. Those events are stored in a simulation repository, which can have various capabilities. Personally I prefer one that is backed by a stateful knowledge session and can employ complex event processing and rules to provide meaningful simulation results.

This is just a short heads-up on the simulation efforts in jbpm, so please leave your comments on what you would like to see supported by this component.

Further details about the jbpm-simulation components can be found in the jbpm wiki (soon).

jBPM5 Developer Guide book is on its way!!!

For those who keep an eye on the jBPM user forum this won't be any surprise, as it was already announced, and some time ago even a request for comments was posted. So, I would like to emphasise once again: a brand new book for developers about jBPM5 and Drools by Mauricio Salatino, a.k.a. Salaboy, is coming.



I had the pleasure of helping slightly with this book by reviewing it. I admit this is excellent reading both for those who are new to the jBPM5 (and Drools) world and for those who are already familiar with it and are planning to put it into mission-critical systems. Mauricio provides not only guidelines about the engine but goes beyond that by elaborating on the different approaches to how to best utilize these frameworks.

It is much more than the title would suggest; it is not only about jBPM5 (though the main focus is on business processes) but also covers topics like:
  • BPMN2 specification
  • Human tasks and WS-HT specification
  • Rule engine (Drools Expert)
  • Complex Event Processing (Drools Fusion)
  • Tooling (jBPM Web Designer, Eclipse modeller, etc.)
  • Centralized repository (Drools Guvnor)
The book comes with a number of real-life examples, including source code, that allow the reader to follow all the content in every chapter.

In conclusion, this is a book that everyone who wants to use jBPM5 to its full extent should get. I definitely recommend it to everyone interested in the jBPM and Drools projects, as it provides an excellent introduction and much more than that.

The book is scheduled to be available in December 2012, so stay tuned.

Credits
Special thanks go to Mauricio Salatino and Esteban Aliverti for this great book. Looking forward to the next one :)

Dispose session in CMT environment

Since jBPM5 is a flexible process engine, it can be deployed in various flavours. One of them is to embed it into your enterprise application running on an application server, regardless of the vendor (JBoss, WebSphere, WebLogic, etc.).
One option among many others is to make use of it as part of your business logic implemented as an EJB. If you choose bean-managed transactions (BMT), you do not need to take any additional steps, as your business logic maintains the transaction boundaries. However, when you use container-managed transactions (CMT) the situation is a little different, as it is the container's (application server's) responsibility to manage transactions.

Before we jump into the details of what needs to be done for a CMT based application, let's mention one important practice common to both types (BMT and CMT):

The session must be disposed outside the transaction, meaning the transaction must be committed/rolled back before the session can be disposed.

Obviously this applies to situations where the session should be disposed as part of the business logic; for instance, with a session-per-process-instance architecture this could be desired, but not when there is a single centralized session.

If the session is disposed before the transaction completes, an exception will be thrown on transaction completion, complaining that the session has already been disposed:

IllegalStateException("Illegal method call. This session was previously disposed.")

With this in mind, let's take a look at how this can be done in CMT based implementations. Since we do not control the transaction, how can we dispose of the session after the transaction has completed?
A simple answer is to use a dedicated Command that registers a transaction synchronization object, which is called on transaction completion so the session can be safely disposed.

Here is an example of such a command's execute method:
    
public Void execute(Context context) {

    final StatefulKnowledgeSession ksession =
            ((KnowledgeCommandContext) context).getStatefulKnowledgesession();
    try {
        TransactionManager tm =
                (TransactionManager) new InitialContext().lookup(tmLookupName);
        tm.getTransaction().registerSynchronization(new Synchronization() {

            @Override
            public void beforeCompletion() {
                // not used here
            }

            @Override
            public void afterCompletion(int status) {
                // the transaction has completed, so it is now safe to dispose
                ksession.dispose();
            }
        });
    } catch (Exception e) {
        e.printStackTrace();
    }

    return null;
}

So, instead of calling the default ksession.dispose() method at the end of your business logic, simply call ksession.execute(new CMTDisposeCommand());
That will ensure the session is disposed as part of the transaction completion.

Here is the complete CMT dispose command.

jBPM web designer runs on VFS

As part of the efforts for jBPM and Drools version 6.0, the web designer is going through quite a few enhancements too. One of the major features is a flexible mechanism to persist modelled processes (and other assets that relate to them, such as forms, process images, etc.), a.k.a. assets, even without being embedded in Drools Guvnor.
So let's start with the main part here - what does a flexible mechanism to persist assets mean? To answer this, let's look at what is currently (jBPM 5.x) available:

  • designer by default runs in embedded mode inside Drools Guvnor
  • designer stores all assets inside Drools Guvnor JCR repository
  • designer can run in standalone mode but only as modeling tool without capabilities to store assets
So as listed above, there is only one option to persist assets: inside Drools Guvnor. In most cases this is good enough or even desired, but there are quite a few situations where modelling capabilities need to be delivered with a custom application, and including complete Drools Guvnor could be too much.
That leads us to the flexible mechanism implemented: the designer has been equipped with a Repository interface that is considered the entry point for interacting with the underlying storage. By default the designer comes with a Virtual File System based repository that:
  • provides a default implementation that supports
    • a simple (local) file system repository
    • a git based repository
  • allows for pluggable VFS provider implementations
  • is based on standards - Java NIO2
Extensions to what is delivered out of the box can be done in one of the following ways:
  1. if a VFS based repository is not what the user needs, an alternative implementation of the Repository interface can be provided, e.g. one backed by a database
  2. if VFS is what the user is looking for but neither the local file system nor git is the right implementation, additional providers can be developed
Let's look a little bit deeper into what these new features are and how users will benefit from them.

1. It's based on Java 7 NIO2


The VFS support is based on Java SE 7 NIO2, but it does not require Java 7 to run, as it comes with a backport implementation of the selected parts of NIO2 that are required.

2. Different providers for Virtual File System


The simplest option is to use the designer with local file system storage, which simply utilizes the file system the designer is running on. While this will most likely provide the best performance, it leaves the user with rather limited options when it comes to clustering, distributed environments or backups.

The next option, which I would personally recommend, is to utilize git as the underlying storage. People who work with git in their software development projects will most likely notice quite a few advantages, as in the end process definitions are much like source code: they can be versioned, developed in parallel (branching) and included in some sort of release cycle.

3. Save process directly in designer editor



The designer now allows users to save a process directly from the editor, which stores the SVG process content as well, all with just one click!




4. Repository menu


With the repository, the designer provides a simple UI menu to navigate through the repository and perform basic operations such as:

  • open processes in designer editor
  • create assets and directories
  • copy/move assets
  • delete assets and directories
  • preview files

This menu is intended as a basic file system browser, utilized more in standalone mode; when integrated with jBPM console-ng and guvnor-ng (UberFire), more advanced options will be delivered in this area.

5. Simpler integration with jBPM console and Drools Guvnor


Both the jBPM console and Drools Guvnor are going to be "refreshed" for version 6.0, and thus the integration between these components and the designer will be simplified, as they will all be unified at the repository level, meaning a single repository can be shared across all three components.



That will be all for a brief introduction, but certainly not all on this topic. Expect more to come as soon as a preview is released: how to configure the different repositories, and more updates on the git based repository and how to make the best of it.

Your comments are more than welcome, as they can help make the designer the best modelling tool out there :)

Known limitation

Currently the git based repository does not support moving assets and directories as an atomic operation, which means the preferred approach is to copy first and then delete.



Clustering in jBPM v6


Clustering in jBPM v5 was not an easy task; there were several known issues that had to be resolved on the client side (the project implementing a solution with jBPM), to name a few:

  • session management - when to load/dispose knowledge session
  • timer management - required to keep knowledge session active to fire timers
This is not the case any more in version 6, where several improvements made their way into the code base; for example, a new module responsible for complete session management was introduced - the jbpm runtime manager. More on the runtime manager in the next post; this one focuses on what a clustered solution might look like. First of all let's start with all the important pieces a jbpm environment consists of:


  1. asset repository - VFS based repository backed with GIT - this is where all the assets are stored during the authoring phase
  2. jbpm server - JBoss AS7 with a deployed jbpm console (BPM focused web application) or kie-wb (fully featured web application that combines the BPM and BRM worlds)
  3. data base - backend where all the state data is kept (process instances, ksessions, history log, etc)

Repository clustering

The asset repository is a GIT backed virtual file system (VFS) that keeps all the assets (process definitions, rules, data model, forms, etc) in a reliable and efficient way. Anyone who has worked with GIT understands perfectly how good it is for source management - and what else are assets if not source code?
Since it is a file system, it resides on the same machine as the server that uses it, which means it must be kept in sync between all servers of a cluster. For that jbpm makes use of two well known open source projects:

  • Apache Zookeeper
  • Apache Helix

Zookeeper is responsible for gluing all parts together, while Helix is the cluster management component that registers all cluster details (the cluster itself, nodes, resources).

These two components are utilized by the runtime environment that jbpm v6 is based on:
  • kie-commons - provides the VFS implementation and clustering
  • uberfire framework - provides the backbone of the web applications

So let's take a look at what we need to do to set up a cluster of our VFS:

Get the software

  • download Apache Zookeeper (note that 3.3.4 and 3.3.5 are currently the only versions that have been tested, so make sure you get the correct version)
  • download Apache Helix (note that the tested version was 0.6.1)

Install and configure

  • unzip Apache Zookeeper into the desired location (from now on we refer to it as zookeeper_home)
  • go to zookeeper_home/conf and make a copy of zoo_sample.cfg named zoo.cfg
  • edit zoo.cfg and adjust settings if needed; these two are important in most cases:
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181

  • unzip Apache Helix into the desired location (from now on we refer to it as helix_home)

Setup cluster

Now we have all the software available locally, so the next step is to configure the cluster itself. We start by launching the Zookeeper server, which will be the master of the cluster configuration:
  • go to zookeeper_home/bin
  • execute the following command to start the zookeeper server:
sudo ./zkServer.sh start
  • the zookeeper server should now be started; if it fails to start, make sure that the data directory defined in the zoo.cfg file exists and is accessible
  • all zookeeper activity can be viewed in zookeeper_home/bin/zookeeper.out
Next the cluster itself must be defined; Apache Helix provides utility scripts for that, which can be found in helix_home/bin.

  • go to helix_home/bin
  • create cluster
./helix-admin.sh --zkSvr localhost:2181 --addCluster jbpm-cluster
  • add nodes to the cluster 
node 1
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster nodeOne:12345
node2
    ./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster nodeTwo:12346
add as many nodes as you will have jBPM cluster members (in most cases the number of application servers in the cluster)
NOTE: nodeOne:12345 is the unique identifier of the node, which will be referenced later on when configuring application servers; although it looks like a host and port number, it is used to uniquely identify the logical node.
  • add resources to the cluster
./helix-admin.sh --zkSvr localhost:2181 
           --addResource jbpm-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
  • rebalance cluster to initialize it
./helix-admin.sh --zkSvr localhost:2181 --rebalance jbpm-cluster vfs-repo 2

  • start the Helix controller to manage the cluster
./run-helix-controller.sh --zkSvr localhost:2181 
                        --cluster jbpm-cluster 2>&1 > /tmp/controller.log &
Values given above are just examples and can be changed according to your needs:
cluster name: jbpm-cluster
node name: nodeOne:12345, nodeTwo:12346
resource name: vfs-repo
zkSvr value must match Zookeeper server that is used.

Prepare database


Before we start with the application server configuration, the database needs to be prepared; for this example we use a PostgreSQL database. The jBPM server will create all required tables itself by default, so there is not much work required, but a few simple tasks must be done before starting the server configuration.

Create database user and database

First of all PostgreSQL needs to be installed; next, a user that will own the jbpm schema needs to be created on the database. In this example we use:
user name: jbpm
password: jbpm

Once the user is ready the database can be created; again for this example jbpm is chosen as the database name.

NOTE: this information (user name, password, database name) will be used later on in the application server configuration.

Create Quartz tables

Lastly the Quartz tables must be created; the best way is to utilize the database scripts provided with the Quartz distribution (jbpm uses Quartz 1.8.5). The DB scripts are usually located under QUARTZ_HOME/docs/dbTables.

Create quartz definition file 

The Quartz configuration used by the jbpm server needs to accommodate the needs of the environment; as this guide shows only a basic setup it will obviously not cover all needs, but it allows for further improvements.

Here is a sample configuration used in this setup:
#============================================================================
# Configure Main Scheduler Properties  
#============================================================================

org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO

#============================================================================
# Configure ThreadPool  
#============================================================================

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5

#============================================================================
# Configure JobStore  
#============================================================================

org.quartz.jobStore.misfireThreshold = 60000

org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=managedDS
org.quartz.jobStore.nonManagedTXDataSource=notManagedDS
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval = 20000

#============================================================================
# Configure Datasources  
#============================================================================
org.quartz.dataSource.managedDS.jndiURL=jboss/datasources/psjbpmDS
org.quartz.dataSource.notManagedDS.jndiURL=jboss/datasources/quartzNotManagedDS


Configure JBoss AS 7 domain


1. Create JDBC driver module - for this example PostgreSQL
a) go to JBOSS_HOME/modules directory (on EAP JBOSS_HOME/modules/system/layers/base)
b) create module folder org/postgresql/main
c) copy the postgresql driver jar into the module folder (org/postgresql/main) under the name postgresql-jdbc.jar
d) create module.xml file inside module folder (org/postgresql/main) with following content:
<module xmlns="urn:jboss:module:1.0" name="org.postgresql">
    <resources>
        <resource-root path="postgresql-jdbc.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>

2. Configure data sources for jbpm server
a) go to JBOSS_HOME/domain/configuration
b) edit domain.xml file
for simplicity's sake we use the default domain configuration, which uses profile "full" and defines two server nodes as part of main-server-group
c) locate the profile "full" inside the domain.xml file and add new data sources
main data source used by jbpm
<datasource jndi-name="java:jboss/datasources/psjbpmDS"
            pool-name="postgresDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
    <driver>postgres</driver>
    <security>
        <user-name>jbpm</user-name>
        <password>jbpm</password>
    </security>
</datasource>
        
additional data source for quartz (non managed pool)
<datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS"
            pool-name="quartzNotManagedDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
    <driver>postgres</driver>
    <security>
        <user-name>jbpm</user-name>
        <password>jbpm</password>
    </security>
</datasource>
and define the driver used by the data sources:
<driver name="postgres" module="org.postgresql">
    <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
</driver>
        
3. Configure security domain 
     a) go to JBOSS_HOME/domain/configuration
     b) edit domain.xml file
for simplicity's sake we use the default domain configuration, which uses profile "full" and defines two server nodes as part of main-server-group
      
     c) locate the profile "full" inside the domain.xml file and add a new security domain for jbpm-console (or kie-wb) - this is just a copy of the "other" security domain defined there by default
    
<security-domain name="jbpm-console-ng" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
    </authentication>
</security-domain>
        
for the kie-wb application, simply replace jbpm-console-ng with kie-ide as the name of the security domain.
        
4. Configure server nodes

    a) go to JBOSS_HOME/domain/configuration
    b) edit host.xml file
    c) locate the servers that belong to "main-server-group" in the host.xml file and add the following system properties:
    


| property name                 | property value                     | comments                                                                          |
| ----------------------------- | ---------------------------------- | --------------------------------------------------------------------------------- |
| org.kie.nio.git.dir           | /home/jbpm/node[N]/repo            | location where the VFS asset repository will be stored for node[N]                 |
| org.quartz.properties         | /jbpm/quartz-definition.properties | absolute file path to the quartz definition properties                             |
| jboss.node.name               | nodeOne                            | unique node name within the cluster (nodeOne, nodeTwo, etc)                        |
| org.uberfire.cluster.id       | jbpm-cluster                       | name of the helix cluster                                                          |
| org.uberfire.cluster.zk       | localhost:2181                     | location of the zookeeper server                                                   |
| org.uberfire.cluster.local.id | nodeOne_12345                      | unique id of the helix cluster node; note that ':' is replaced with '_'            |
| org.uberfire.cluster.vfs.lock | vfs-repo                           | name of the resource defined on the helix cluster                                  |
| org.kie.nio.git.deamon.port   | 9418                               | port used by the VFS repo to accept client connections, unique per cluster member  |
| org.kie.kieora.index.dir      | /home/jbpm/node[N]/index           | location where the search index will be created (maintained by Apache Lucene)      |
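Keeping these nine properties consistent across nodes is error prone, so the per-node block can be generated from a few values. Here is a small helper sketched in Python (the property names come from the table above; the paths, cluster id and zookeeper address are just this post's example values and should be adjusted per environment):

```python
# Sketch: emit the per-node <system-properties> block for host.xml.
# Assumes the example cluster values used throughout this post.

def system_properties(node_name, helix_node_id, git_port, base_dir):
    # org.uberfire.cluster.local.id is the helix node id with ':' -> '_'
    local_id = helix_node_id.replace(":", "_")
    props = [
        ("org.kie.nio.git.dir", base_dir + "/repo"),
        ("org.quartz.properties", "/jbpm/quartz-definition.properties"),
        ("jboss.node.name", node_name),
        ("org.uberfire.cluster.id", "jbpm-cluster"),
        ("org.uberfire.cluster.zk", "localhost:2181"),
        ("org.uberfire.cluster.local.id", local_id),
        ("org.uberfire.cluster.vfs.lock", "vfs-repo"),
        ("org.kie.nio.git.deamon.port", str(git_port)),
        ("org.kie.kieora.index.dir", base_dir + "/index"),
    ]
    lines = ["<system-properties>"]
    for name, value in props:
        lines.append('  <property name="%s" value="%s" boot-time="false"/>'
                     % (name, value))
    lines.append("</system-properties>")
    return "\n".join(lines)

print(system_properties("nodeOne", "nodeOne:12345", 9418, "/home/jbpm/nodeone"))
```

Running it once per node (nodeOne, nodeTwo, ...) with the right port and directory gives blocks like the two examples below.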
    
examples for the two nodes:
    
  •     nodeOne
<system-properties>
  <property name="org.kie.nio.git.dir" value="/tmp/jbpm/nodeone" 
                                       boot-time="false"/>
  <property name="org.quartz.properties" 
      value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
  <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
  <property name="org.uberfire.cluster.id" value="jbpm-cluster" 
                                           boot-time="false"/>
    <property name="org.uberfire.cluster.zk" value="localhost:2181" 
                                           boot-time="false"/>
  <property name="org.uberfire.cluster.local.id" value="nodeOne_12345" 
                                                 boot-time="false"/>
  <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" 
                                                 boot-time="false"/>
  <property name="org.kie.nio.git.deamon.port" value="9418" boot-time="false"/>
  <property name="org.kie.kieora.index.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
</system-properties>
    
  •     nodeTwo
<system-properties>
    <property name="org.kie.nio.git.dir" value="/tmp/jbpm/nodetwo" 
                                         boot-time="false"/>
    <property name="org.quartz.properties" 
       value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
    <property name="org.uberfire.cluster.id" value="jbpm-cluster" 
                                             boot-time="false"/>
    <property name="org.uberfire.cluster.zk" value="localhost:2181" 
                                             boot-time="false"/>
    <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346" 
                                                   boot-time="false"/>
    <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" 
                                                   boot-time="false"/>
    <property name="org.kie.nio.git.deamon.port" value="9419" boot-time="false"/>
    <property name="org.kie.kieora.index.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
</system-properties>
5. Create user(s) and assign them the proper roles on the application server

Add application users
In the previous step a security domain was created so that jbpm console (or kie-wb) users can be authenticated when logging on. Now it's time to add some users so you can log on to the application once it's deployed. To do so:
 a) go to JBOSS_HOME/bin
 b) execute ./add-user.sh script and follow the instructions on the screen
 - use Application realm not management
 - when asked for roles, make sure you assign at least:
 for jbpm-console: jbpm-console-user
 for kie-wb: kie-user
 
add as many users as you need; the same goes for roles - those listed above are required to be authorized to use the web application.

Add management (of application server) user
To be able to manage the application server as a domain, we need to add an administrator user; it's similar to adding application users, but the realm needs to be management:
 a) go to JBOSS_HOME/bin
 b) execute ./add-user.sh script and follow the instructions on the screen
 - use Management realm not application

The application server should now be ready to be used, so let's start the domain:

JBOSS_HOME/bin/domain.sh

after a few seconds (the servers are still empty) you should be able to access both server nodes at the following locations:
administration console: http://localhost:9990/console

the port offset is configurable in host.xml for given server.


Deploy application - jBPM console (or kie-wb)

Now it's time to prepare and deploy the application, either jbpm-console or kie-wb. By default both applications come with predefined persistence that uses the ExampleDS from AS7 and an H2 database, so this configuration needs to be altered to use the PostgreSQL database instead.

Required changes in persistence.xml

  • change the jta-data-source name to match the one defined on the application server:
             java:jboss/datasources/psjbpmDS
  • change the hibernate dialect to postgresql:
             org.hibernate.dialect.PostgreSQLDialect
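Put together, the affected part of persistence.xml ends up looking like the following illustrative fragment (the persistence unit name may differ between builds, and all omitted elements stay as shipped in the war):

```xml
<persistence-unit name="org.jbpm.domain" transaction-type="JTA">
  ...
  <jta-data-source>java:jboss/datasources/psjbpmDS</jta-data-source>
  <properties>
    <property name="hibernate.dialect"
              value="org.hibernate.dialect.PostgreSQLDialect"/>
    ...
  </properties>
</persistence-unit>
```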

Application built from source

If the application is built from source you need to edit the persistence.xml file located under:
jbpm-console-ng/jbpm-console-ng-distribution-wars/src/main/jbossas7/WEB-INF/classes/META-INF/
then rebuild the jbpm-distribution-wars module to produce the deployable package, which is named:
jbpm-console-ng-jboss-as7.0.war

    Deployable package downloaded

    In case you have the deployable package downloaded (which is already a war file), you need to extract it and change the persistence.xml located under:
    WEB-INF/classes/META-INF
    Once the file is edited and contains the correct values to work with the PostgreSQL database, the application needs to be repackaged:
    NOTE: before repackaging make sure that the previous war is not in the same directory, otherwise it will be packaged into the new war too.

    jar -cfm jbpm-console-ng.war META-INF/MANIFEST.MF *

    IMPORTANT: make sure that you include the same manifest file that was in the original war file, as it contains valuable entries.
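    The same extract/edit/repackage cycle can also be scripted. Below is a sketch (Python, standard library only; the file names are just examples) that rewrites a single entry of a war while copying everything else - including META-INF/MANIFEST.MF - unchanged, which sidesteps the manifest pitfall above:

```python
import io
import zipfile

def replace_in_war(src_war, dst_war, member, new_bytes):
    """Copy src_war to dst_war, replacing the content of one entry.

    Every other entry, notably META-INF/MANIFEST.MF, is carried over
    verbatim, so the original manifest entries are preserved.
    """
    with zipfile.ZipFile(src_war) as src, \
         zipfile.ZipFile(dst_war, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            data = new_bytes if info.filename == member else src.read(info.filename)
            dst.writestr(info, data)

# example: swap the persistence.xml inside a dummy in-memory war
orig, fixed = io.BytesIO(), io.BytesIO()
with zipfile.ZipFile(orig, "w") as z:
    z.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
    z.writestr("WEB-INF/classes/META-INF/persistence.xml", "<persistence/>")
replace_in_war(orig, fixed, "WEB-INF/classes/META-INF/persistence.xml",
               b"<persistence><!-- postgresql settings --></persistence>")
```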


    To deploy the application, log on as the management user into the administration console of the domain and add the new deployment using the Runtime view of the console. Once the deployment is added to the domain, assign it to the right server group - in this example we used main-server-group. By default this will enable the deployment on all servers within that group - meaning it gets deployed on all of them. This will take a while, and after a successful deployment you should be able to access jbpm-console (or kie-wb) at the following locations:


    the context root (jbpm-console-ng) depends on the name of the war file that was deployed, so if the filename is jbpm-console-ng-jboss7.war then the context root will be jbpm-console-ng-jboss7. The same rule applies to the kie-wb deployment.

    And that's it - you should have fully operational jbpm cluster environment!!!

    Obviously in normal scenarios you would want to hide the complexity of different urls from end users (for example by putting a load balancer in front of them), but I explicitly left that out of this example to show the proper behavior of independent cluster nodes.

    The next post will go into details on how different components play smoothly in a cluster, to name a few:
    • failover - in case cluster node goes down
    • timer management - how does timer fire in cluster environment
    • session management - auto reactivation of session on demand
    • etc
    As we are still in development mode, please share your thoughts on what you would like to see in cluster support for jBPM; your input is most appreciated!

    Make your work asynchronous

    Asynchronous execution as part of a business process is a common requirement. jBPM has had support for it via custom implementations of WorkItemHandler. In general it was as simple as providing an async handler (is it as simple as it sounds?) that delegates the actual work to some worker, e.g. a separate thread that proceeds with the execution.

    Before we dig into the details of jBPM v6 support for asynchronous execution, let's look at the common requirements for such execution:

    • first and foremost it allows asynchronous execution of a given piece of business logic
    • it allows retries in case resources are temporarily unavailable, e.g. during external system interaction
    • it allows errors to be handled in case all retries have been attempted
    • it provides a cancellation option
    • it provides a history log of executions
    When confronting these requirements with the "simple async handler", we can immediately notice that all of these would need to be implemented over and over again by different systems. So that is not so appealing, is it?

    jBPM executor to the rescue 

    Since version 6, jBPM introduces a new component called the jbpm executor, which provides quite advanced features for asynchronous execution. It delivers a generic environment for background execution of commands. Commands are nothing more than business logic encapsulated behind a simple interface. A command does not carry any process runtime related information, which means there is no need to complete work items or anything of that sort. It purely focuses on the business logic to be executed. It receives data via a CommandContext and returns the results of the execution in an ExecutionResults object. The most important rule for both input and output data is: it must be serializable.
    The executor covers all the requirements listed above and provides a user interface as part of the jbpm console and kie workbench (kie-wb) applications.
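    To make the contract concrete, here is a deliberately simplified model of it, sketched in Python rather than the real Java interfaces (Command, CommandContext, ExecutionResults): a command maps serializable inputs to serializable outputs, and the executor wraps it with the retry and error handling listed earlier.

```python
# Simplified model of the jbpm executor contract (not the real API):
# a command turns a context (dict of serializable inputs) into results
# (dict of serializable outputs); the executor adds retries and errors.

def run_with_retries(command, context, retries=3):
    attempts = 0
    last_error = None
    while attempts < 1 + retries:
        attempts += 1
        try:
            return {"status": "DONE", "results": command(context),
                    "attempts": attempts}
        except Exception as err:  # the real executor also logs an error entry
            last_error = err
    return {"status": "ERROR", "error": str(last_error), "attempts": attempts}

# a command that only succeeds on its third attempt,
# e.g. a temporarily unavailable external system
calls = {"count": 0}
def flaky_command(ctx):
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("external system temporarily unavailable")
    return {"echo": ctx["payload"]}

print(run_with_retries(flaky_command, {"payload": "hello"}))
```

    Once all retries are exhausted the job ends in an error state instead of succeeding, which is exactly what the Jobs panel below lets you inspect.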

    Illustrates Jobs panel in kie-wb application

    The above screenshot illustrates the history view of the executor's job queue. As can be seen, there are several options available:
    • view the details of a job
    • cancel a given job
    • create a new job
    Quite a few things can already be achieved with that. But what about executing logic as part of a process instance - via a work item handler?

    Async work item handler

    jBPM (again since version 6) provides an out of the box async work item handler that is backed by the jbpm executor. So by default all the features the executor delivers will be available for background execution within a process instance. AsyncWorkItemHandler can be configured in two ways:
    1. as a generic handler that expects to get the command name as part of the work item parameters
    2. as a specific handler for a given type of work item - for example a web service
    Option number 1 is configured by default for the jbpm console and kie-wb web applications and is registered under the async name in every ksession that is bootstrapped within the applications. So whenever there is a need to execute some logic asynchronously, the following needs to be done at modeling time (using the jbpm web designer):
    • specify async as the TaskName property
    • create a data input called CommandClass
    • assign the fully qualified class name of the command to the CommandClass data input
    Next, follow the regular way to complete the process modeling. Note that all data inputs will be transferred to the executor, so they must be serializable.
    Illustrates assignments for an async node (web service execution)
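    The behaviour of the generic handler can be pictured with a small model (a Python sketch; the registry and command name here are hypothetical - in jBPM the CommandClass value is a fully qualified Java class name, and the real handler also completes the work item when the job finishes):

```python
# Toy model of the generic async handler: pick the command named by the
# "CommandClass" data input and pass the remaining inputs to it.

COMMANDS = {
    # hypothetical registry entry; jBPM instantiates the class by name
    "org.example.PrintOutCommand": lambda data: {"printed": sorted(data)},
}

def execute_async_work_item(work_item_inputs):
    inputs = dict(work_item_inputs)
    command_name = inputs.pop("CommandClass", None)
    command = COMMANDS.get(command_name)
    if command is None:
        # surfaces as a job error, which a boundary error event can catch
        raise ValueError("unknown command: %s" % command_name)
    return command(inputs)

print(execute_async_work_item({"CommandClass": "org.example.PrintOutCommand",
                               "message": "hello"}))
```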

    The second option allows different instances of AsyncWorkItemHandler to be registered for different work items. Since it's registered for a dedicated work item, most likely the command will be dedicated to that work item as well. If so, CommandClass can be specified at registration time instead of requiring it to be set as a work item parameter. To register such handlers for jbpm console or kie-wb, an additional class is required to declare what shall be registered: a CDI bean that implements the WorkItemHandlerProducer interface needs to be provided and placed on the application classpath so the CDI container will be able to find it. Then at modeling time the TaskName property needs to be aligned with the one used at registration time.

    Ready to give it a try?

    To see this working it's enough to try the latest kie-wb or jbpm console build (either master or CR2). As soon as the application is deployed, go to the Authoring perspective and you'll find an async-examples project in the jbpm-playground repository. It comes with three samples that illustrate asynchronous execution from within a process instance:
    • async executor
    • async data executor
    • check weather
    Async executor is the simplest process that allows commands to be executed asynchronously. When starting a process instance it will ask for the fully qualified class name of the command; for demo purposes use org.jbpm.executor.commands.PrintOutCommand, which is similar to the SystemOutWorkItemHandler in that it simply prints the content of the CommandContext to the logs. You can leave it empty or provide an invalid command class name to see the error handling mechanism (using a boundary error event).

    Async data executor is pretty much the same as Async executor, but it operates on custom data (included in the project - User and UserCommand). On the start process form, use org.jbpm.examples.cmd.UserCommand to invoke the custom command included in the project.

    Check weather is an asynchronous execution of a web service call. It checks the weather for any U.S. zip code and provides the results as a human task. So on the start form, specify who should receive the user task with the results and the zip code of the city you would like the weather forecast for.


    Start Check weather process with async web service execution


    And that's it, asynchronous execution is now available out of the box in jBPM v6. 

    Have fun and as usual keep the comments coming so we can add more useful features!

    jBPM6 samples with RuntimeManager

    jBPM6 introduces a new module - jbpm-runtime-manager - that aims to significantly simplify the management of:

    • KnowledgeBase (KieBase)
    • KnowledgeSession (KieSession)
    Moreover it allows predefined strategies to be used for handling knowledge sessions and their relation to process instances. By default jBPM6 comes with three strategies:
    • Singleton - a single knowledge session that will execute all process instances
    • Per Request - every request (which is in fact a call to getRuntimeEngine) will get a new knowledge session
    • Per Process Instance - every process instance will have its own dedicated knowledge session for its entire life time
    To make use of a strategy it's enough to create the proper type of RuntimeManager. jBPM6 allows an instance of the RuntimeManager to be obtained in various ways; this article provides hands on information on how that can be achieved.
    With jBPM6 a whole new way of building applications has been provided - Contexts and Dependency Injection (CDI) is now available for users to build applications and bring the power of jBPM to the next level. Obviously CDI is not the only way to make use of jBPM - the regular API based approach is still available and fully functional.
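    The practical difference between the three strategies is which knowledge session a caller gets back. A toy model sketched in Python (sessions are represented by plain integers here; the real objects are KieSessions handed out via getRuntimeEngine):

```python
import itertools

# Toy model of the three session strategies: what does each call get back?
_session_ids = itertools.count(1)

class Singleton:
    def __init__(self):
        self._session = next(_session_ids)
    def get_session(self, process_instance_id=None):
        return self._session                  # one shared session for everyone

class PerRequest:
    def get_session(self, process_instance_id=None):
        return next(_session_ids)             # fresh session on every call

class PerProcessInstance:
    def __init__(self):
        self._sessions = {}
    def get_session(self, process_instance_id):
        # dedicated session for the life time of the process instance
        return self._sessions.setdefault(process_instance_id,
                                         next(_session_ids))
```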

    CDI integration

    jBPM6 tooling (like jbpm console or kie-wb) is built on CDI, and thanks to that a set of services has been provided to ease the development of custom CDI based applications. These services are bundled as part of jbpm-kie-services and provide a compact solution for most of the operations required to put BPM into your application:
    • deployment service - deploys and undeploys units (kjars) into the process engine
    • runtime service - gives access to the state of the process engine, such as process definitions, process instances, history log, etc
    • bpmn2 data service - gives access to details of the process definition taken from BPMN2 xml
    • form provider service - gives access to forms for processes and tasks
    So whenever a custom application is built with CDI, these services are the recommended way to get the most out of CDI and jBPM. Moreover it's the safest way too, as they are used in the jBPM tooling and have received a fair amount of testing to ensure they do what is expected.

    Using jbpm-kie-services is not mandatory for using jBPM6 in a CDI environment, but it does have some advantages:
    - allows maintaining multiple RuntimeManagers within a single execution environment
    - allows independent deploy and undeploy of units (kjars) without a server/application restart
    - allows selecting different strategies for different units

    See jbpm-sample-cdi-services project for details.

    While these are all appealing add-ons, they are not always needed, especially if the application requires only a single RuntimeManager instance. If that's the case, we can let the CDI container create the RuntimeManager instance for us. That is considered the second approach to CDI integration, where only a single instance of RuntimeManager is active and it's managed completely by the CDI container. The application needs to provide the environment that the RuntimeManager will be based on.

    See jbpm-sample-cdi project for details.

    API approach

    Last but not least is the regular API based approach to jBPM6 and RuntimeManager. Here the manager is expected to be built by the application, which in fact gives access to all configuration options. Moreover this is the simplest way to extend the out of the box functionality of RuntimeManager.

    See jbpm-sample project for details.

    This article is just an introduction to the ways jBPM6 and RuntimeManager can be used. More detailed articles will follow to provide in-depth information on every option given here (cdi with services, pure cdi, api).

    If there are any aspects you would like to see in the next articles (regarding the runtime manager and CDI), just drop a comment here and I'll do my best to include them.

    jBPM empowers Magnolia CMS

    I am glad to announce that Magnolia CMS uses jBPM 5 as the default workflow engine in their version 5. Just two weeks ago I had the pleasure of talking about jBPM (both v5 and v6) at the Magnolia conference in Basel, Switzerland. This was a great event that I can recommend to everyone interested in CMS.

    Together with Espen from the Magnolia team, we gave a really nice presentation about both jBPM and the Magnolia Workflow that utilizes jBPM.

    Here you can find the presentation, and as soon as the recording is available I'll link it here as well.

    Stay tuned for more updates and information about jBPM :)

    jBPM 6 workshops

    I would like to announce some workshops regarding the upcoming jBPM version 6 where you can gain some insight into what's in it for you.

    There are currently two workshops scheduled:

    • 12th of October in Warsaw, Poland
    • 23rd-24th of October in London, UK

    The workshop in Poland will be held as part of the Warsjava 2013 conference, where besides "Introduction to jBPM 6" a lot more can be found. The conference is by default in Polish, but I expect that if there are any non Polish speaking attendees there won't be any issue with holding it in English. I'll be giving the presentation and workshop on jBPM 6 at this year's Warsjava.

    The workshops in London will obviously be held in English, and there will be plenty of opportunities to learn a lot about development with the projects. Presented by Mauricio "Salaboy" Salatino and Michael Anstis, so it's an event you can't miss.

    Please take a look at the content and register as places are limited.

    jBPM 6 first steps

    This post gives a very quick introduction to how users can take their first steps in jBPM 6, using entirely web based tooling to build up:

    • processes
    • rules
    • process and task forms
    • data model
    With just three simple examples you will learn how easily and quickly you can start with BPM. So let's start.

    The simplest process

    The first process illustrates how you move around in the KIE workbench web application. Where to:
    • create repository
    • create project
    • configure Knowledge Base, KnowledgeSession
    • create process
    • build and deploy
    • execute process and work with user task

    Custom data and forms

    Next let's explore more and start with slightly more advanced features like:

    • building custom data model that will be used as process variable
    • make use of process variables in user task
    • define custom forms for process and tasks
    • edit and adjust your process and task forms

    Make use of business rules and decisions in your process

    Finally, let's make the process more efficient by applying business rules in the process and then using gateways as decision points. This example introduces:

    • use of business rule task
    • define business rules with Drools
    • use XOR gateway to split between different paths in the process


    It's important to note that a business rule task can automatically insert and retract process variables using its data inputs and outputs. When defining them, make sure that a data input and its corresponding output are named exactly the same, so the engine can properly retract the facts on business rule task completion.

    That would be all for the first steps with jBPM 6. Stay tuned for more examples and videos!

    As usual comments more than welcome.


    jBPM 6 with spring

    As some of you might have noticed, jBPM got quite a few improvements in version 6.0, to name just a few:

    • RuntimeManager
    • enhanced timer service
    • new deployment model - based on kjar and maven
    • brand new tooling 
    • see release notes for more
    there is (as always) still room for improvement. After 6.0 went out, we started to look at how we could ease the usage of jBPM in Spring-based applications on top of the new API. Much of the API is now fluent in style, which is not really Spring friendly.
    Before we dive into details of the API and how it can be used in Spring let's look at possible usage scenarios that users might be interested in:

    • Self managed process engine
    This is the standard (and the simplest) way to get up and running with jBPM in your application. You configure it once and it runs as part of the application. With RuntimeManager, both the process engine and the task service are managed in complete synchronization, meaning there is no need for the end user to deal with "plumbing" code to make the two work together. 
    • Shared task service
    In certain situations a single instance of TaskService should be used. This approach gives more flexibility in configuring the task service instance, as it's not hidden behind the RuntimeManager. Once configured, it is used by the RuntimeManager when requested; in this case the RuntimeManager will not create new task service instances as it does with the self managed process engine approach.

    To provide a Spring-based way of setting up jBPM, a few factory beans were added:

    org.kie.spring.factorybeans.RuntimeEnvironmentFactoryBean


    Factory responsible for producing RuntimeEnvironment instances that are consumed by the RuntimeManager upon creation. It allows you to create the following types of RuntimeEnvironment (which mainly determines what is configured by default):
    • DEFAULT - default (most common) configuration for RuntimeManager
    • EMPTY - completely empty environment to be manually populated
    • DEFAULT_IN_MEMORY - same as DEFAULT but without persistence of the runtime engine
    • DEFAULT_KJAR - same as DEFAULT but knowledge assets are taken from a KJAR identified by releaseId or GAV
    • DEFAULT_KJAR_CL - built directly from the classpath that contains a kmodule.xml descriptor
    Mandatory properties depend on the selected type, but knowledge information must be given for all types. That means one of the following must be provided:
    • knowledgeBase
    • assets
    • releaseId
    • groupId, artifactId, version
    Next for DEFAULT, DEFAULT_KJAR, DEFAULT_KJAR_CL persistence needs to be configured:
    • entity manager factory
    • transaction manager
    The transaction manager must be a Spring transaction manager, as its presence drives how the entire persistence and transaction support is configured. Optionally an EntityManager can be provided to be used instead of always creating a new one from the EntityManagerFactory - e.g. when using a shared entity manager from Spring. All other properties are optional and are meant to override the defaults given by the selected environment type.

    org.kie.spring.factorybeans.RuntimeManagerFactoryBean

    FactoryBean responsible for creating RuntimeManager instances of a given type based on the provided runtimeEnvironment. Supported types:
    • SINGLETON
    • PER_REQUEST
    • PER_PROCESS_INSTANCE
    where the default is SINGLETON when no type is specified. Every runtime manager must be uniquely identified, so identifier is a mandatory property. All instances created by this factory are cached so they can be properly disposed of via the destroy method (close()).

    org.kie.spring.factorybeans.TaskServiceFactoryBean

    Creates an instance of TaskService based on the given properties. The following mandatory properties must be provided:
    • entity manager factory
    • transaction manager
    The transaction manager must be a Spring transaction manager, as its presence drives how the entire persistence and transaction support is configured. Optionally an EntityManager can be provided to be used instead of always creating a new one from the EntityManagerFactory - e.g. when using a shared entity manager from Spring. In addition to the above, there are optional properties that can be set on the task service instance:
    • userGroupCallback - implementation of UserGroupCallback to be used, defaults to MVELUserGroupCallbackImpl
    • userInfo - implementation of UserInfo to be used, defaults to DefaultUserInfo
    • listeners - list of TaskLifeCycleEventListener instances that will be notified of various task operations
    This factory creates only a single task service instance, as it's intended to be shared across all other beans in the system.


    Now we know what components we are going to use, so it's time to look at how we could actually configure our Spring application to take advantage of jBPM version 6. We start with the simple self managed approach, where we configure a single runtime manager with inline resources (processes) added.

    1. First we setup entity manager factory and transaction manager:

    <bean id="jbpmEMF" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
      <property name="persistenceUnitName" value="org.jbpm.persistence.spring.jta"/>
    </bean>

    <bean id="btmConfig" factory-method="getConfiguration" class="bitronix.tm.TransactionManagerServices">
    </bean>

    <bean id="BitronixTransactionManager" factory-method="getTransactionManager"
          class="bitronix.tm.TransactionManagerServices" depends-on="btmConfig" destroy-method="shutdown"/>

    <bean id="jbpmTxManager" class="org.springframework.transaction.jta.JtaTransactionManager">
      <property name="transactionManager" ref="BitronixTransactionManager"/>
      <property name="userTransaction" ref="BitronixTransactionManager"/>
    </bean>
    with this we have the persistence configuration ready, giving us:
    • a JTA transaction manager (backed by Bitronix - for unit tests or servlet containers)
    • an entity manager factory for the persistence unit named org.jbpm.persistence.spring.jta
    2. Next we configure the resource we are going to use - a business process

    <bean id="process" factory-method="newClassPathResource" class="org.kie.internal.io.ResourceFactory">
      <constructor-arg>
        <value>jbpm/processes/sample.bpmn</value>
      </constructor-arg>
    </bean>
    this configures a single process that will be available for execution - sample.bpmn, taken from the classpath. This is the simplest way to get your processes included when trying out jBPM.

    3. Then we configure RuntimeEnvironment with our infrastructure (entity manager, transaction manager, resources)


    <bean id="runtimeEnvironment" class="org.kie.spring.factorybeans.RuntimeEnvironmentFactoryBean">
      <property name="type" value="DEFAULT"/>
      <property name="entityManagerFactory" ref="jbpmEMF"/>
      <property name="transactionManager" ref="jbpmTxManager"/>
      <property name="assets">
        <map>
          <entry key-ref="process"><util:constant static-field="org.kie.api.io.ResourceType.BPMN2"/></entry>
        </map>
      </property>
    </bean>
    that gives us a default runtime environment, ready to be used to create a RuntimeManager instance.

    4. And finally we create the RuntimeManager with the environment we just set up

    <bean id="runtimeManager" class="org.kie.spring.factorybeans.RuntimeManagerFactoryBean" destroy-method="close">
      <property name="identifier" value="spring-rm"/>
      <property name="runtimeEnvironment" ref="runtimeEnvironment"/>
    </bean>

    In just four steps you are ready to execute your processes with Spring and jBPM 6, utilizing an EntityManagerFactory and a JTA transaction manager.

    As an optional step, especially useful when testing, you can create an AuditLogService to get history information about your process executions.


    <bean id="logService" class="org.jbpm.process.audit.JPAAuditLogService">
      <constructor-arg>
        <ref bean="jbpmEMF"/>
      </constructor-arg>
    </bean>

    The complete Spring configuration file can be found here.

    This is just one configuration setup that jBPM 6 supports - a JTA transaction manager and an EntityManagerFactory; the others are:
    • JTA and SharedEntityManager
    • Local Persistence Unit and EntityManagerFactory
    • Local Persistence Unit and SharedEntityManager
    What is important to note here is that there is no need to set up the TaskService at all; some parts of it, like the user group callback, can be configured via RuntimeEnvironment, but the whole setup is done automatically by the RuntimeManager. No need to worry about that.

    If you need more control over the TaskService instance, however, you can set it up yourself and let the RuntimeManager use it instead of creating its own instances.

    To do so, start the same way as in the self managed process engine case, following steps 1 and 2. Next we configure the task service

    <bean id="taskService" class="org.kie.spring.factorybeans.TaskServiceFactoryBean" destroy-method="close">
      <property name="entityManagerFactory" ref="jbpmEMF"/>
      <property name="transactionManager" ref="jbpmTxManager"/>
      <property name="listeners">
        <list>
          <bean class="org.jbpm.services.task.audit.JPATaskLifeCycleEventListener"/>
        </list>
      </property>
    </bean>
    with that we add an extra task life cycle listener to save all task operations as log entries.

    Then step 3 needs to be slightly enhanced to set the task service instance in the runtime environment so the RuntimeManager can make use of it


    <bean id="runtimeEnvironment" class="org.kie.spring.factorybeans.RuntimeEnvironmentFactoryBean">
      <property name="type" value="DEFAULT"/>
      <property name="entityManagerFactory" ref="jbpmEMF"/>
      <property name="transactionManager" ref="jbpmTxManager"/>
      <property name="assets">
        <map>
          <entry key-ref="process"><util:constant static-field="org.kie.api.io.ResourceType.BPMN2"/></entry>
        </map>
      </property>
      <property name="taskService" ref="taskService"/>
    </bean>
    that will disable task service creation by RuntimeManager and always use this single shared instance.
    Step 4 (creating RuntimeManager) is exactly the same.

    Look at the example configuration files and test cases for more details on how they are utilized.

    Please also note that the factory beans for Spring integration are currently scheduled to be released with jBPM 6.1.0, but I would like to encourage you to give them a try before then, so in case something is not working or missing we will have time to fix it for 6.1.0.Final. So all hands on board, and let's Spring it :)

    how to deploy processes in jBPM 6?

    After the release of jBPM 6.0, a number of questions came from the community about how processes can be deployed into the new and shiny jbpm console.

    So let's start with a short recap of what the deployment model looks like in jBPM 6. In version 5.x, processes were stored in so-called packages produced by Guvnor and then downloaded by the jbpm console for execution using the KnowledgeAgent. Alternatively, one could drop process files (bpmn2 files) into a predefined directory that was scanned on jbpm console start. That was it.

    That forced users to always use Guvnor when dynamic deployment was needed. There is nothing wrong with that - it was actually the recommended approach - but not everyone was happy with the setup.

    Version 6, on the other hand, moves away from proprietary packages in favor of well known and mature Apache Maven based packaging - known as knowledge archives, or kjars. What does that mean? First of all, processes, rules etc. (aka business assets) are now part of a simple jar file built and managed by Maven. Alongside the business assets, Java classes and other file types are stored in the jar file too. Moreover, like any other Maven artifact, a kjar can declare dependencies on other artifacts, including other kjars.
    What makes a kjar special compared with regular jars? A single descriptor file kept inside the kjar's META-INF directory - kmodule.xml. That descriptor allows you to define:

    • knowledge bases and their properties
    • knowledge sessions and their properties
    • work item handlers
    • event listeners
    By default this descriptor is empty (just the kmodule root element) and is considered a marker file. Whenever a runtime component (such as the jbpm console) is about to process a kjar, it looks up kmodule.xml to build its runtime representation. See the documentation for more details about kmodule.xml and kjars.
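    To make the descriptor concrete, here is a minimal sketch of a non-empty kmodule.xml; the names defaultKieBase and defaultKieSession are illustrative choices, not required values:

    ```xml
    <!-- META-INF/kmodule.xml; the empty marker file is just the kmodule root element -->
    <kmodule xmlns="http://www.drools.org/xsd/kmodule">
      <!-- a knowledge base covering all packages, marked as default -->
      <kbase name="defaultKieBase" default="true" packages="*">
        <!-- a stateful session marked as default so runtime components can pick it up -->
        <ksession name="defaultKieSession" default="true" type="stateful"/>
      </kbase>
    </kmodule>
    ```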

    Alright, now we know a bit more about what is actually placed in the runtime environment - the kjar. So how can we deploy a kjar into a running jbpm console? There are several ways:

    Design and build your kjar inside jbpm console

    The easiest way is to actually use the jbpm console to completely build the kjar. For that purpose there is an entire perspective available - the Authoring perspective - that consists of quite a big set of editors tailored to various asset types.

    First, you have to have a repository created where your projects (which become kjars once they are built) will be stored. When running the demo setup of the jbpm console (installed by the jbpm installer), you will have two repositories already available - jbpm-playground and uf-playground. You can use either of these or create a new repository.
    Once you have a repository available, create a new item - a project - specifying a GAV (GroupId, ArtifactId, Version) to name it.
    Next you create business assets in it, like business processes, rules, data models, forms, etc. Now we are at the stage where we should build and deploy our project to the runtime. Nothing simpler - just press the "Build & Deploy" button and you're ready to rock!

    Is that really that simple?! In most cases, yes, it really is that simple. But you need to be aware of several rules (convention over configuration) that drive the build and deploy. The first rule is that everything needs to be properly designed - processes, rules, etc. The build phase catches any compilation or validation errors and provides feedback to the user via the Problems Panel.
    Assuming all assets build successfully, the deploy phase comes into the picture. The deploy phase is actually a two step process:
    • Maven phase - 
      • it installs the built project (now a kjar) into the local Maven repository (usually ~/.m2/repository, but this can be configured the standard Maven way - via settings.xml)
      • deploys the built project into the jbpm console's embedded Maven repository - a remote repository accessible over HTTP that can be declared in pom.xml or settings.xml like any other Maven repository
    • Runtime phase
      • once the Maven phase completes successfully, the jbpm console will attempt to deploy the kjar into the runtime environment for execution. There are a few requirements to make this happen - either:
        • kmodule.xml is empty - which it is by default unless you edited it via the Project Editor - or
        • kmodule.xml has at least one knowledge base and stateful knowledge session defined and marked as default
    When both phases complete successfully, your kjar is deployed to the runtime environment and ready for execution. Simply go to Process Management --> Process Definitions to examine what's there and start executing your freshly deployed processes.

    So that's the first and easiest way to get started with deployments in jBPM 6.

    Build project in IDE and push to console for build and deploy

    Another approach is for when you prefer to do the heavy work in your IDE, like Eclipse (since the modeling capabilities - the bpmn2 modeler - are only available in Eclipse). The steps are pretty similar, although there is no need to create a repository here; instead you clone an existing one from the jbpm console. So you first start by cloning an existing repository.

    git clone ssh://{jbpmconsole-host}:{port}/{repository-name}

    Then create a Maven project - you can actually do that with the jBPM Project wizard in Eclipse, which creates a simple Maven project with a sample business process and an executable class in it to get you started much faster. 

    Note: make sure you place the project in the cloned repository structure so it can later be pushed back.

    It declares a dependency on the jbpm-test module to be able to execute the sample process.
    Once you have a mavenized project, you're ready to start working on your business assets, data model and more. 
    When done, you're ready to push your project to the jbpm console so it can be built and deployed to the runtime environment. To do so, use any Git tool that allows you to pull and push changes from your working copy to the master repository. To add all files in your working copy to the commit index:

    git add -A

    then check whether you have added too much, like the target folder; if so, create or edit a .gitignore file to exclude the unneeded files. And commit:

    git commit -m "my first jbpm project"

    once committed, push it to origin

    git push origin master

    now go into the jbpm console Authoring perspective and you should see your project in the repository, ready to be built and deployed. Just follow the same steps from the first approach to build and deploy it. 
    That was the second approach to deploying business assets into jbpm console version 6, somewhere in between developers and business users. It can also be seen as a collaboration model, where business users initially create high level processes, rules etc., and then developers step in to add implementation details and some "glue code" to make it all fully executable.

    Build and deploy to Maven from IDE

    This one focuses entirely on developers and allows the work to be done without being very aware of the jbpm console at all. Here developers build regular Maven projects that include business assets, Java classes and forms, and then add a kmodule.xml to turn the jar into a kjar. Everything is done in the IDE of the developer's choice. The same goes for the version control system; it does not have to be Git in this case, because the jbpm console won't be used as a source management tool for these projects - only for its pure execution capabilities.

    Once you're done with development, you simply build the project with Maven (mvn clean install). That makes it directly available to any other component on your local machine that would like to use it. So if you're running the jbpm console on your machine, you can skip directly to the deployment section (three paragraphs below ;))

    When the jbpm console is running on a remote host, you have two ways to make it aware of artifacts built externally:
    • deploy (using Maven) your projects into the jbpm console Maven repository - as this is like any other repository, you can use the Maven deploy goal after defining that repository in your pom.xml or settings.xml
    • make the jbpm console's Maven installation aware of any external Maven repositories it should consider when deploying kjars
    The first one, deploying to the Maven repository, has nothing special about it; it's as simple as defining the repository in settings.xml so it can be contacted and the artifact stored when running mvn clean install deploy.
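    As a sketch, the repository declaration for the deploy goal could go into pom.xml like this - note the id and URL here are assumptions and must match your actual console installation, with a corresponding <server> entry carrying credentials in settings.xml:

    ```xml
    <!-- pom.xml: example only - check your jbpm console installation for the real URL -->
    <distributionManagement>
      <repository>
        <id>jbpm-console</id>
        <name>jbpm console embedded Maven repository</name>
        <url>http://localhost:8080/jbpm-console/maven2</url>
      </repository>
    </distributionManagement>
    ```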
    The second approach is again the standard Maven way. On the machine (and under the user) where the jbpm console is running, add your main Maven repository to settings.xml, so whenever the jbpm console attempts to deploy the kjar it will look it up in your Maven repository.

    With all these steps, the jbpm console is now capable of finding kjars that are outside of its local Maven repository, so it can download them when needed. You can now go to the jbpm console Deploy --> Deployments perspective, where you can add new deployment units manually.
    It's as simple as providing the GAV of the project you want to deploy and, optionally, knowledge base and knowledge session names if you defined more than the defaults.
    In addition to that, you can select the runtime strategy that fits your requirements for that given kjar - choose one of Singleton, Per Request or Per Process Instance.

    That concludes the deployment options available in jBPM version 6. It promotes the well known standards defined by Maven and allows various ways of integrating with the jbpm console functionality. You, as a user, are in control of how you work with the tooling: leverage its full power to do everything over the web, integrate with a Git server, or do everything externally and use the console only for execution.

    Hope that sheds some light on the way you can use jBPM 6 out of the box and empowers your day to day work. As usual, ending with the same sentence: all comments and ideas are more than welcome.






    jBPM 6 - store your process variables anywhere

    Most jBPM users are aware of how jBPM stores process variables, but let's recap it here for completeness.

    NOTE: this article covers jBPM with persistence enabled; without persistence, process variables are kept in memory only.

    jBPM puts a single requirement on objects used as process variables:
    • the object must be serializable (it simply must implement the java.io.Serializable interface)
    with that, the jBPM engine is capable of storing all process variables as part of the process instance using a marshaling mechanism backed by Google Protocol Buffers. That means the actual instances are marshaled into bytes and stored in the database. This is not always desired, especially for objects that are not actually owned by the process instance. For example:
    • JPA entities of another system
    • documents stored in document/content management system 
    • etc
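    The serialization requirement above is easy to satisfy. The following self-contained sketch (OrderInfo is a made-up variable class, not part of any jBPM API) shows a class that would qualify as a process variable, together with the kind of bytes round trip the engine performs when persisting and restoring it:

    ```java
    import java.io.*;

    // Hypothetical process variable class; the only jBPM requirement is Serializable.
    class OrderInfo implements Serializable {
        private static final long serialVersionUID = 1L;
        String orderNumber;
        int quantity;

        OrderInfo(String orderNumber, int quantity) {
            this.orderNumber = orderNumber;
            this.quantity = quantity;
        }
    }

    public class SerializableVariableDemo {
        public static void main(String[] args) throws Exception {
            OrderInfo original = new OrderInfo("ORD-1", 5);

            // Serialize to bytes, roughly what happens before the variable
            // is stored as part of the process instance.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
                out.writeObject(original);
            }

            // Deserialize, as happens when the process instance is restored.
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buffer.toByteArray()))) {
                OrderInfo restored = (OrderInfo) in.readObject();
                System.out.println(restored.orderNumber + ":" + restored.quantity);
            }
        }
    }
    ```

    A class missing Serializable would fail this round trip with a NotSerializableException, which is exactly why the engine imposes the requirement.
    
    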
    Luckily, jBPM has a solution for that as well, called pluggable variable persistence strategies. Out of the box jBPM provides two:
    • serialization based, mentioned above, which works on all object types as long as they are serializable (org.drools.core.marshalling.impl.SerializablePlaceholderResolverStrategy)
    • JPA based, which works on objects that are entities (org.drools.persistence.jpa.marshaller.JPAPlaceholderResolverStrategy)
    Let's spend some time on the JPA based strategy, as it can become rather useful in many cases where jBPM is used in embedded mode. Consider a scenario where our business process uses entities as process variables. The same entities might be altered from outside the process, and we would like to keep them up to date within the process as well. To do so, we need the JPA based variable persistence strategy, which is capable of storing entities in the database and retrieving them back.
    To configure a variable persistence strategy, you place it in the environment that is used when creating knowledge sessions. Note that the order of the strategies is important, as they are evaluated in the order they are given to decide which one will be used. Best practice is to always set the serialization based strategy as the last one. 
    An example of how you can use it with RuntimeManager:


    // create entity manager factory
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.sample");

    RuntimeEnvironment environment = 
         RuntimeEnvironmentBuilder.Factory.get().newDefaultBuilder()
            .entityManagerFactory(emf)
            .addEnvironmentEntry(EnvironmentName.OBJECT_MARSHALLING_STRATEGIES, 
                new ObjectMarshallingStrategy[]{
                    // set the entity manager factory for the jpa strategy so it
                    // knows how to store and read entities
                    new JPAPlaceholderResolverStrategy(emf),
                    // set the serialization based strategy as the last one to
                    // deal with non entity classes
                    new SerializablePlaceholderResolverStrategy(
                            ClassObjectMarshallingStrategyAcceptor.DEFAULT)
                })
            .addAsset(ResourceFactory.newClassPathResource("cmis-store.bpmn"), 
                      ResourceType.BPMN2)
            .get();
    // create the runtime manager and start using entities as part of your process
    RuntimeManager manager = 
         RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);

    Once we know how to configure it, let's take some time to understand how it actually works. First of all, every process variable, at the time it is going to be persisted, is evaluated against the strategies, and it's up to each strategy to accept or reject the given variable. If accepted, only that strategy is used to persist the variable; if rejected, the other strategies are consulted.
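    The "first strategy to accept wins" evaluation order can be sketched with plain Java - the VariableStrategy interface and names below are illustrative stand-ins, not the real jBPM API:

    ```java
    import java.util.Arrays;
    import java.util.List;

    // Minimal sketch of strategy selection: strategies are consulted in
    // registration order and the first one that accepts the variable is used.
    interface VariableStrategy {
        boolean accept(Object variable);
        String name();
    }

    public class StrategySelectionDemo {
        static VariableStrategy select(List<VariableStrategy> strategies, Object variable) {
            for (VariableStrategy strategy : strategies) {
                if (strategy.accept(variable)) {
                    return strategy; // first accepting strategy wins
                }
            }
            throw new IllegalStateException("No strategy accepts " + variable);
        }

        public static void main(String[] args) {
            // stand-in for a selective strategy such as the JPA based one
            VariableStrategy entityStrategy = new VariableStrategy() {
                public boolean accept(Object v) { return v instanceof Integer; }
                public String name() { return "jpa"; }
            };
            // catch-all, registered last, mirroring the serialization based strategy
            VariableStrategy catchAll = new VariableStrategy() {
                public boolean accept(Object v) { return true; }
                public String name() { return "serialization"; }
            };

            List<VariableStrategy> strategies = Arrays.asList(entityStrategy, catchAll);
            System.out.println(select(strategies, 42).name());
            System.out.println(select(strategies, "plain").name());
        }
    }
    ```

    Registering the catch-all first would shadow every other strategy, which is why the serialization based one should always go last.
    
    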

    Note: make sure you add your entity classes to the persistence.xml that will be used by the JPA strategy

    The JPA strategy will accept only classes that declare a field with the @Id annotation (javax.persistence.Id), which ensures we have a unique id to use when retrieving the variable.
    The serialization based one simply accepts all variables by default and thus should be the last strategy in line, although this default behavior can be altered by providing another acceptor implementation.

    Once a strategy accepts a variable, it performs the marshaling operation to store the variable and the unmarshaling operation to retrieve it from the back end store (of the type it supports).

    In the case of JPA, marshaling checks whether the entity is already a stored entity - i.e. has its id set - and:

    • if not, it will persist the entity using entity manager factory that was assigned to it
    • if yes, it will merge it with the persistence context to make sure up to date information is stored
    When unmarshaling, it uses the unique id of the entity to load it from the database and provide it as the process variable. It's that simple :)

    With that, we have quickly covered the default (serialization based) strategy and the JPA based strategy. But the title of this article says we can store variables anywhere, so how is that possible?
    It's possible because of the nature of variable persistence strategies - they are pluggable. We can create our own, simply add it to the environment, and process variables that meet the strategy's acceptance criteria will be persisted by it. So as not to leave you empty handed, let's look at another implementation I created for the purpose of this article (although while working on it I came to believe it will become more than just an example for this article).

    Implementing a variable persistence strategy is actually very simple; it's a matter of implementing a single interface: org.kie.api.marshalling.ObjectMarshallingStrategy

    public interface ObjectMarshallingStrategy {

        public boolean accept(Object object);

        public void write(ObjectOutputStream os,
                          Object object) throws IOException;

        public Object read(ObjectInputStream os) throws IOException, ClassNotFoundException;

        public byte[] marshal( Context context,
                               ObjectOutputStream os,
                               Object object ) throws IOException;

        public Object unmarshal( Context context,
                                 ObjectInputStream is,
                                 byte[] object,
                                 ClassLoader classloader ) throws IOException, ClassNotFoundException;

        public Context createContext();
    }

    the most important methods for us are:

    • accept - decides if this strategy will be responsible for persistence of given object
    • marshal - performs operation to store process variable
    • unmarshal - performs operation to retrieve process variable
    the remaining ones are for backward compatibility with the old marshaling framework (prior to protobuf), so they are not mandatory to implement, but it's worth putting the logic there too, as it will most likely be the same as for marshal (write) and unmarshal (read).

    The example implementation mentioned stores and retrieves process variables as documents in content/document management systems that support repository access via CMIS. I used Apache Chemistry as the integration component, which can easily talk to CMIS enabled systems such as Alfresco.


    So first bit of requirements:

    • process variables must be of certain type to be stored in the content repository
    • documents (process variables stored in cms) can be:
      • created
      • updated (with versioning)
      • read
    • process variables must be kept up to date
    All of this sounds simple, and of course the point is to keep it simple at this stage. A CMS can be used for much more, but we wanted to get started and then enhance it if needed. So the strategy implementation org.jbpm.integration.cmis.impl.OpenCMISPlaceholderResolverStrategy supports the following:
    • when marshaling
      • create new documents if it does not have object id assigned yet
      • update document if it has already object id assigned
        • by overriding existing content
        • by creating new major version of the document 
        • by creating new minor version of the document
    • when unmarshaling
      • load the content of the document based on given object id
    So you can actually use this strategy for:
    • creating new documents from the process based on custom content
    • update existing documents with custom content
    • load existing documents into process variable based on object id only
    These are very high level details, so let's look at the actual code that does the "magic". Let's start with the marshal logic - note that it is simplified a bit for readability here; the complete code can be found on GitHub.


    public byte[] marshal(Context context, ObjectOutputStream os, Object object) throws IOException {
        Document document = (Document) object;
        // connect to repository
        Session session = getRepositorySession(user, password, url, repository);
        try {
            if (document.getDocumentContent() != null) {
                // no object id yet, let's create the document
                if (document.getObjectId() == null) {
                    Folder parent = ... // find folder by path
                    if (parent == null) {
                        parent = ... // create folder
                    }
                    // now we are ready to create the document in CMS
                } else {
                    // object id exists so time to update
                }
            }
            // now we need to store some info as part of the process instance
            // so we can look it up later; in this case it's the object id and the
            // class we use as process variable, so we can recreate the instance on read
            ByteArrayOutputStream buff = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream( buff );
            oos.writeUTF(document.getObjectId());
            oos.writeUTF(object.getClass().getCanonicalName());
            oos.close();
            return buff.toByteArray();
        } finally {
            // let's clear the session in the end
            session.clear();
        }
    }

    As you can see, it first deals with the actual storage (in this case a CMIS based repository) and then saves some small details needed to recreate the actual object instance on reading: the objectId and the fully qualified class name of the process variable. And that's it - a process variable of type Document will be stored inside the content repository.
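    The bookkeeping bytes that end up in the process instance can be illustrated in isolation with plain Java streams - the object id and class name below are made up, and the real strategy reads them back through a DroolsObjectInputStream rather than the plain ObjectInputStream used here:

    ```java
    import java.io.*;

    // Sketch of the small payload the strategy keeps in the process instance:
    // just the repository object id and the variable's class name, written with
    // writeUTF and read back in the same order during unmarshal.
    public class BookkeepingBytesDemo {
        public static void main(String[] args) throws IOException {
            // marshal side: write the two values
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(buffer);
            oos.writeUTF("workspace://SpacesStore/abc-123"); // hypothetical object id
            oos.writeUTF("org.jbpm.integration.cmis.Document"); // hypothetical class name
            oos.close();

            // unmarshal side: read the two values back in the same order
            ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(buffer.toByteArray()));
            String objectId = ois.readUTF();
            String className = ois.readUTF();
            ois.close();

            System.out.println(objectId);
            System.out.println(className);
        }
    }
    ```

    Because the document content itself lives in the CMS, the bytes stored with the process instance stay tiny regardless of the document's size.
    
    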

    Then let's look at the unmarshal method:


    public Object unmarshal(Context context, ObjectInputStream ois, byte[] object, ClassLoader classloader) throws IOException, ClassNotFoundException {
        DroolsObjectInputStream is = new DroolsObjectInputStream(new ByteArrayInputStream( object ), classloader );
        // first we read out the object id and class name we stored during marshaling
        String objectId = is.readUTF();
        String canonicalName = is.readUTF();
        // connect to repository
        Session session = getRepositorySession(user, password, url, repository);
        try {
            // get the document from the repository and create a new instance of the variable class
            CmisObject doc = .....
            Document document = (Document) Class.forName(canonicalName).newInstance();
            // populate the process variable with meta data and content
            document.setObjectId(objectId);
            document.setDocumentName(doc.getName());
            document.setFolderName(getFolderName(doc.getParents()));
            document.setFolderPath(getPathAsString(doc.getPaths()));
            if (doc.getContentStream() != null) {
                ContentStream stream = doc.getContentStream();
                document.setDocumentContent(IOUtils.toByteArray(stream.getStream()));
                document.setUpdated(false);
                document.setDocumentType(stream.getMimeType());
            }
            return document;
        } catch (Exception e) {
            throw new RuntimeException("Cannot read document from CMIS", e);
        } finally {
            // do some clean up...
            is.close();
            session.clear();
        }
    }

    Nothing more than the logic to get the id and class name so the instance can be recreated, load the document from the CMS repository, and we're done :)
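The small lookup payload that ties marshal and unmarshal together can be illustrated with plain JDK streams. Below is a self-contained sketch (the object id and class name are made up for the example) showing that whatever writeUTF stores, readUTF recovers in the same order:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class LookupPayloadExample {

    // mirrors the tail of marshal(): keep only the info needed to find the document again
    public static byte[] writeLookupInfo(String objectId, String className) throws IOException {
        ByteArrayOutputStream buff = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(buff);
        oos.writeUTF(objectId);
        oos.writeUTF(className);
        oos.close();
        return buff.toByteArray();
    }

    // mirrors the head of unmarshal(): recover the id and class name in the same order
    public static String[] readLookupInfo(byte[] payload) throws IOException {
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(payload));
        String objectId = ois.readUTF();
        String className = ois.readUTF();
        ois.close();
        return new String[] { objectId, className };
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = writeLookupInfo("doc-123", "org.jbpm.test.MyDocument");
        String[] info = readLookupInfo(payload);
        System.out.println(info[0] + " -> " + info[1]);
        // prints: doc-123 -> org.jbpm.test.MyDocument
    }
}
```

In the real strategy the class name read back is fed to Class.forName so the variable instance can be recreated before being populated from the repository.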

    Last but not least, the accept method.


    public boolean accept(Object object) {
        return object instanceof Document;
    }

    And that is all that is needed to implement your own variable persistence strategy. The only thing left is to register the strategy on the environment so it will be evaluated when storing/retrieving variables. It's done the same way as described for the JPA based one.

    Complete source code with some tests showing the complete usage from a process can be found here. Enjoy and feel free to provide feedback - maybe it's worth starting a repository of such strategies so we can have a rather rich set of strategies to choose from...

    Reuse your business assets with jBPM 6

    As described in the article about the deployment model in jBPM 6, business assets are included in so called knowledge archives - kjars. A kjar is nothing more than a regular jar file with a knowledge descriptor - kmodule.xml - that is under the control of Apache Maven. But there is more to this...

    Since kjars are managed by Maven, one kjar can declare another kjar as its dependency, which means that the assets included in the dependency will be available for execution as well. All of that is available when working with the jbpm console (aka kie workbench). To provide more information on this, let's look at an example.
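In Maven terms there is nothing special here - the top project simply lists the reusable kjar in its pom.xml dependencies. The coordinates below are illustrative; use whatever groupId/artifactId/version your kjar was built with:

```xml
<dependency>
  <groupId>org.jbpm.test</groupId>
  <artifactId>reusable-project</artifactId>
  <version>1.0.0</version>
</dependency>
```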

    Use case definition

    There is a need to prepare a simple registration process that will ask the user who starts the process for basic personal information such as name and age. Then business rules will evaluate whether that person is an adult or a teenager. Once completed, the result will be presented to a reviewer to see the details of the evaluation. Last but not least is to proceed with the actual registration in the system. So we can see that part of this logic might very well be considered reusable - the part that is responsible for gathering information about a person.

    So let's design it this way:

    • first project - reusable project - will deal only with gathering personal information and presenting it to the verifying personnel after the business rules have been evaluated.
               As you can see, besides business assets the data model is included in reusable-project so it can
               be used by projects that declare it as a dependency, same as with any other Maven based project.
    • second project - top project - will provide additional process logic on top of the common collect info procedure and do the registration work.
    So, this is the structure of the projects we are going to use to support the case described.

    What must actually be done to make this work? First of all the reusable project needs to be created, as it will be a dependency of the top project so it must exist. In the reusable project we need to define a knowledge base and knowledge session with auto deploy disabled, as we don't want to have it at runtime as a standalone project but included in the top project.  With that said we create:
    • one knowledge base (kbase) - ensure it's not marked as default
    • one stateful knowledge session (ksession) - ensure it's not marked as default
    • include all packages - use * wildcard for it
    Note: we do this to illustrate what configuration options we have here and to ensure that auto deployment to runtime environment will not be possible - no default knowledge base and knowledge session.
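    As a sketch of what the Project Editor generates, the reusable project's kmodule.xml could look like this - note default="false" on both so nothing is auto deployed:

```xml
<kmodule xmlns="http://www.jboss.org/xsd/kmodule">
  <kbase name="kbase" default="false" packages="*">
    <ksession name="ksession" type="stateful" default="false"/>
  </kbase>
</kmodule>
```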

    Let's create this collect info process that could look like this:
    A simple process - two user tasks and a business rule task. So what will it do:
    • Enter person details will collect personal information from a user - name and age
    • Evaluate will execute business rules that are responsible for marking a given person as an adult if (s)he is older than 21
    • Verify simply presents the results of the process
    Both the rule and user tasks operate on the data model, to be more specific the org.jbpm.test.Person class. It was created using the Data Modeler and placed inside the kjar.
    Next, the process and task forms are generated and altered to ask only for the right information. The Person class includes three properties:
    • name - string
    • age - integer
    • adult - boolean
    Since we have business rules for evaluating whether the user is an adult or a teenager, we don't want to ask for it via forms. So these fields are removed from the "Enter person details" task form.
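    The evaluation rule itself can stay tiny. A hypothetical DRL sketch of the adult check (assuming the Person class described above) could look like this:

```drl
rule "mark person as adult"
when
    $p : Person( age > 21, adult == false )
then
    modify( $p ) { setAdult( true ) };
end
```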
    With all that we are ready to build the project so it can be used as a dependency. Just hit the Build & Deploy button in the Project Editor and observe the Problems Panel. If everything went ok, it will display a single error saying that deployment failed because it cannot find defaultKBase. That is expected, as we defined a knowledge base and knowledge session that are not marked as default and thus auto deploy fails. But the kjar is available in the Maven repository so it can be used as a dependency.

    Next we create the top project and add a single dependency to it - reusable-project. This is done in the Project Editor in the Dependencies list section. You can add it from the repository as it's already built. Next we need to define a knowledge base and knowledge session:
    • one knowledge base (kbase-top) - ensure it's marked as default
    • one stateful knowledge session (ksession-top) - ensure it's marked as default
    • include all packages - use * wildcard for it
    • include kbase defined in reusable project - kbase
    Note: make sure that names do not collide between kjars as that will result in failing compilation of the knowledge base.
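    Again as a sketch, the top project's kmodule.xml then marks its knowledge base as default and pulls in the one from the dependency via the includes attribute:

```xml
<kmodule xmlns="http://www.jboss.org/xsd/kmodule">
  <kbase name="kbase-top" default="true" packages="*" includes="kbase">
    <ksession name="ksession-top" type="stateful" default="true"/>
  </kbase>
</kmodule>
```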

    Now we are ready to create our top level process that could look like this:

    Again a simple process that will:

    • log incoming request for registration using script task - Log registration
    • invoke common part to collect info - Collect Info - by invoking the reusable project process, rules, forms etc
    • and lastly will show the outcome of the collection process for approval
    The most important part here is that the Collect Info activity will use the process (and other assets) from another kjar, thanks to the use of Maven dependencies and kbase inclusion.

    To examine this example in detail you can clone the repository into your jbpm console (kie workbench) and build both projects - first the reusable project and then the top project.

    This illustrates only the tip of the iceberg of what is provided by Maven dependencies and knowledge inclusion in jBPM 6. I would like to encourage you to explore these possibilities and look at options to reuse your knowledge in a structured and controlled way - remember this is all standardized by Maven, so things like versioning are supported as well.

    This is a feature that will be available in 6.1.0, so feel free to jump into the wild right away by making use of it in the nightly builds. Comments and feedback are welcome.


    jBPM 6 on WebSphere - installation notes...

    The brand new tooling for jBPM 6 has been out for a while now, mainly targeting the open source world, so by default it was deployable to JBoss AS 7.x / JBoss EAP 6.x and Tomcat 7. Now it's time to expand to other containers, so let's start with WebSphere (version 8.5.x).

    NOTE: This article covers deployment of the kie workbench (aka business central), although this is just one option for making use of jBPM.

    So first of all, we need to get WebSphere Application Server 8.5. If you don't have one already you can download the developer edition from here, which is free to use for development work but not for production.
    Once we have the binaries downloaded, it's time to install it. I will not cover the installation steps here as they are well documented in the IBM documentation and there are no special requirements for the WebSphere installation. Make sure that after installation you create a server profile; this article covers the application server profile.

    Tip: when running on Linux you can encounter a problem at deployment, actually at upload time, that manifests with a very weird exception mostly referring to internal classes of WebSphere. To resolve it, increase the number of open files, for example by issuing the following command before starting the server:
    ulimit -n 300000

    Once WebSphere is properly installed and the verification script confirms it is up and running, we can move on to configuring the server instance. Let's start with the process definition configuration where we can specify JVM parameters such as heap size and system properties (aka custom properties):

    Log on to the WebSphere Administrative Console

    Java Virtual Machine configuration

    Go to Servers > Server Types > WebSphere application servers
    Go to MyServer > Server Infrastructure > Process Definition > Java Virtual Machine

    • Increase heap size 
      • Initial heap size - 1024
      • Max heap size - 2048
    NOTE: heap settings will depend on your environment, so please consider these a starting point that might require some adjustments.

    Go to Additional properties > Custom properties
    • Set JVM system properties
      • jbpm.ut.jndi.lookup set to jta/usertransaction
      • kie.services.jms.queues.response set to jms/KIE.RESPONSE.ALL 
      • kie.services.rest.deploy.async set to false

    This is the mandatory set of system properties, but more can be specified; check the jbpm documentation for the available system properties.

    Security configuration

    Go to Security > Global security
    Ensure the option Enable Application security is checked. 
    Go to Users and groups > Manage groups
    Create groups: 
    • Application groups :
      • admin, analyst, developer, manager, user
    • Task service groups
      • Accounting, HR, IT, PM

    Go to Users and groups > Manage users
    Create a single user and add it to the selected groups above.

    Register the SSL certificate from Github.com

    This is needed in order to enable repository cloning from GitHub, as is the case for the kie-wb example repositories which are fetched from GitHub.

    Go to Security > SSL Certificate and Key Management > Manage endpoint security configurations
    Go to Outbound section. Go to your server node within the tree. Select the HTTP subnode.
    Go to Related Items > Key Stores and certificates
    Select the row in the table named NodeDefaultTrustStore
    Go to Additional properties > Signer certificates
    Click button Retrieve from port
    Fill out the form with these values: Host=github.com, Port=443, Alias=github.com
    Click on Retrieve signer information button, then Ok, and finally, Save to master configuration.

    Data source configuration

    Create the JDBC provider

    Left side panel, click on Resources > JDBC > JDBC Providers
    Select the appropriate scope and click on the New button.
    Fill out the form. For non-listed database types (e.g. H2, Postgres & MySQL) you need to provide the path to the JDBC driver jar plus the following class name:
    • H2 - org.h2.jdbcx.JdbcDataSource
    • Postgres - org.postgresql.xa.PGXADataSource
    • MySQL - com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource

    When you finish, click Ok. If there are no data entry errors, you should be back at the list of JDBC Providers, where you should now see your new provider displayed.

    Create the data source

    Left side panel, click on Resources > JDBC > Data sources
    Select the appropriate scope and click on the New button.
    Fill out the creation form. Set the JNDI name to jdbc/jbpm (it must match the data source defined in the persistence.xml file contained in kie-wb.war).
    Select the existing JDBC provider you created. Click Next.
    Keep clicking Next until Finish.
    Save to master configuration.
    Edit the datasource you just created and click on the Custom properties link.
    Edit and fill the appropriate values required to set-up the connection. This depends on the database type.
    • H2 - URL, user, password
    • Postgres - serverName, databaseName, portNumber, user, password
    • MySQL - serverName, databaseName, port, user, password
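    For reference, the relevant fragment of the persistence.xml inside the war looks roughly like this (trimmed to the essentials; the point is that the jta-data-source value is what your WebSphere JNDI name has to match):

```xml
<persistence-unit name="org.jbpm.domain" transaction-type="JTA">
  <jta-data-source>jdbc/jbpm</jta-data-source>
  <!-- entity classes and provider properties omitted -->
</persistence-unit>
```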
       


    JMS Configuration

    Create a Service Integration Bus that will host all the queues required by jbpm.

    Go to Service integration > Buses

    The next step is to assign bus members that will host the messaging engine, which is essentially an application server instance.

    Next let's create destinations for our queues, all of type Queue:
    Go to Service Integration > Buses > {bus name}
    Destination resources > Destinations

    and create following queues:
    • KIE.AUDIT
    • KIE.RESPONSE.ALL
    • KIE.SESSION
    • KIE.TASK
    Then let's create the actual JMS resources, such as connection factories, queues and activation specifications.

    Connection factories
    Create following connection factories to allow integration over JMS
    • KIE.AUDIT - used for audit logging over JMS to make them asynchronous 
    • KIE.RESPONSE.ALL - used for returning replies after processing incoming messages
    • KIE.SESSION - used for incoming message for process operations e.g. start process
    • KIE.TASK - used for incoming messages for task operations e.g. complete task

    Queues
    Create following queues
    • KIE.AUDIT
    • KIE.RESPONSE.ALL
    • KIE.SESSION
    • KIE.TASK
    Activation specification
    Create following activation specifications
    • KIE.AUDIT
    • KIE.RESPONSE.ALL
    • KIE.SESSION
    • KIE.TASK
    Worth mentioning is that the KIE.AUDIT activation specification should additionally be configured to prevent processing messages concurrently, to avoid out-of-order messages. Set "Maximum concurrent MDB invocations per endpoint" to 1.

    Deploy the application

    Upload the WAR file

    Go to Applications > Application types > Websphere enterprise applications

    Click on Install, select the kie-wb-was8.war file from your local filesystem. Click Next
    From here, you will be asked for several deployment settings.
    You'll need to select the datasource created above as the datasource to be used by the application.



    On the screen Bind listeners for message-driven beans, select Activation Specification for every bean and fill in the corresponding activation specification JNDI name as the Target Resource JNDI Name (e.g. jms/activation/KIE.SESSION). You may also specify the Destination JNDI name using the JNDI name of the appropriate JMS queue (e.g. jms/queue/KIE.SESSION).

    We also recommend setting the context path of the webapp to kie-wb.
    On the screen Map resource references to resources, provide for both beans the JNDI name of the KIE.RESPONSE.ALL connection factory (e.g. jms/conn/KIE.RESPONSE.ALL).


    Application settings


    Go to Applications > Application types > Websphere enterprise applications > kie-wb app > Security role to user/group mapping

    Select the five BPMS roles: admin, analyst, developer, manager, user.
    Click on Map Special Subjects and select the All Authenticated in Application's Realm option.

    Go to Applications > Application types > Websphere enterprise applications > kie-wb app > Class loading and update detection



    Ensure the following radio buttons are checked:
    • Classes loaded with local class loader first (parent last)
    • Single class loader for application
    Save the configurations to the master and restart the server.

    Dashbuilder (BAM) configuration

    Follow the instructions in this article for installing the dashbuilder application on WebSphere Application Server.
    In addition you might want to reduce the logging level for the class
    com.ibm.websphere.rsadapter.WSCallHelper
    - by reduce, set it to WARN level to avoid spamming your server log whenever this class is used.

    This can be done in Troubleshooting -> Logs and trace -> Change log details levels

    Very important to note when using both kie-wb and dashbuilder: both must use the same database (with some databases even the same database user), as dashbuilder depends on tables created and populated by kie-wb, so that dashbuilder works properly (and actually starts correctly).

    Session management settings

    In case of running in combination with dashbuilder (the BAM component), it is recommended to set the following session management property to avoid issues with SSO between kie-wb and dashbuilder.

    Go to:
    Application Servers -> {servername} -> Session management -> Custom properties

    and add custom property:
    name: InvalidateOnUnauthorizedSessionRequestException
    value: true



    Once restarted you should be able to access the kie-wb application by typing the following URL: http://localhost:9080/kie-wb (unless you used another context root at deploy time).

    Have fun and as usual all comments are more than welcome.




    Let's WildFly with jBPM6


    Let's go into the Wild....

    It's been a while since JBoss AS7 was released as a community edition, and it's the default server used by the jBPM community distribution - when using the jbpm installer, but not only then, as it's frequently the first choice for giving jBPM 6 a try.
    No wonder, as it's a state of the art application server, leading in the industry and very developer friendly. Some time ago JBoss AS moved to its next version, called WildFly, which is now available as a final release - there is even already a second version released - 8.1.0.Final. You can download it from here.
    Source - wildfly.org

    All details about WildFly can be found at its web site but just to highlight the most important:

    • Java EE 7
    • extremely lightweight
    • low memory footprint
    • comes with latest version of best open source projects
      • Hibernate
      • infinispan
      • resteasy
      • weld
      • IronJacamar
      • HornetQ
      • and more...
    With that short introduction, it's high time to give jBPM 6 a WildFly run as well. By that I mean having the kie workbench (aka jbpm console) running at full speed on the WildFly application server. The version chosen is the latest one - 8.1.0.Final - due to some issues found in 8.0.0.Final that limited remote capabilities (the REST interface did not work properly); this is not an issue with 8.1.0.Final.

    So jBPM comes with two additional distributions:
    • kie-wb - fully featured workbench that includes both jbpm (process) and drools (rules) authoring and runtime capabilities
    • kie-drools-wb - similar to kie-wb but stripped of the jbpm (process) capabilities, focusing mainly on rules authoring (including projects and repositories)
    There are two dedicated web application archives tailored for the WildFly application server, named as follows:
    • kie-wb-${version}-wildfly.war
    • kie-drools-wb-${version}-wildfly.war
    The assemblies are being built while this article is being written, so be a bit patient and let Jenkins build the artifacts properly - you'll then be able to download them from the JBoss Nexus repository. Keep in mind that these are still snapshots and might have some issues, so for those willing to wait, in about a week from now 6.1.0.CR1 should be out in the wild as well.

    Installation

    Just download WildFly 8.1.0.Final (there is a useful getting started guide), extract it and drop the dedicated WildFly distribution war file into the standalone/deployments directory (you can also rename the war file to kie-wb.war to bind it to the kie-wb context path). Next, same as for JBoss AS7, add users to the security domain so you can log on to the system once it's up and running:
    • use add-user.sh/add-user.bat script in JBOSS_HOME/bin
    • follow the wizard; the important parts are:
      • Application Realm
      • Roles: admin to have access to all capabilities of the system
    Next, ensure that the server is started with the standalone-full profile:

    ./standalone.sh --server-config=standalone-full.xml

    Then wait for the application server to complete startup and visit http://localhost:8080/kie-wb
    And that's it - jBPM 6 is running on the latest JBoss application server - WildFly 8.1.0.Final.

    Currently known issues

    As usual with first major releases of core components there are some issues, fortunately no blockers. Obviously there might still be others, so feel free to report anything that you find not working on WildFly with jBPM6, either on the user forum or as jira issues.
    • class cast exceptions thrown by Errai on cleanup of CDI caches - see the ERRAI-750 issue for details and to keep an eye on it
    • on logout there are errors written to server log about broken pipe or closed stream caused by some Errai interaction - see ERRAI-754 issue for details and to keep track of the progress
    While these issues might be a bit annoying, they do not cause any overall problems for the application (at least none were identified so far). The first one can be mitigated by hiding the warnings via logging configuration. Add the following to standalone-full.xml to hide them:


           <logger category="EventDispatcher">
               <level name="ERROR"/>
           </logger>


    As always, all comments and issues are more than welcome.

