JEM, the BEE is a cloud-aware Java application that implements a Batch Execution Environment to help manage the execution of jobs described by a Job Control Language (JCL).
Core applications are usually performed through batch processing,[1] which involves executing one or more batch jobs in a sequential flow.[2] Like IBM's JES2, the Job Entry Manager (JEM) receives jobs, schedules them for processing, and determines how job output is processed.
Many batch jobs run in parallel, and JCL is used to control the operation of each job. Correct use of JCL parameters allows parallel, asynchronous execution of jobs that may need to access the same data sets. One goal of JEM is to process work while making the best use of system resources; to achieve this, resource management is needed during the key phases of job processing.
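To make the shared data set scenario concrete, here is a minimal sketch of how concurrent jobs could serialize access to the same data set using a distributed lock from Hazelcast, the clustering layer JEM is built on (introduced below). The lock name and the use of an explicit lock are illustrative assumptions, not JEM's actual implementation:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

public class SharedDataSetSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // Illustrative lock name: one distributed lock per shared data set.
        ILock lock = hz.getLock("dataset:PAYROLL.INPUT");
        lock.lock();
        try {
            // Only one job in the whole cluster holds the lock at a time,
            // so parallel jobs cannot corrupt the shared data set.
            System.out.println("updating the data set...");
        } finally {
            lock.unlock();
        }
        hz.shutdown();
    }
}
```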
JEM clustering[3] is based on Hazelcast.[4] Each cluster member (called a node) has the same rights and responsibilities as the others (except for the oldest member, described in detail below): this is because Hazelcast implements peer-to-peer clustering, so there is no "master" node.
When a node starts up, it checks whether a cluster already exists in the network. There are two ways to find this out: multicast discovery or a TCP/IP list of known members.
If no cluster is found, the node becomes the first member of the cluster. If multicast is enabled, it starts a multicast listener so that it can respond to incoming join requests. Otherwise, it listens for join requests coming via TCP/IP.
If a cluster already exists, the oldest member in the cluster receives the join request and checks whether the request is for the right group. If so, the oldest member starts the join process.
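As a sketch of the two discovery options, here is how a Hazelcast node could be configured programmatically; the group name and member address are illustrative, not JEM's actual settings:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;

public class DiscoverySketch {
    public static void main(String[] args) {
        Config config = new Config();
        // The cluster "group" that the oldest member checks on a join request
        // (the name is illustrative).
        config.getGroupConfig().setName("JEM");

        JoinConfig join = config.getNetworkConfig().getJoin();
        // Option 1: multicast discovery, so the node can answer join requests.
        join.getMulticastConfig().setEnabled(true);
        // Option 2: a fixed TCP/IP member list, used when multicast is disabled.
        join.getTcpIpConfig().setEnabled(false).addMember("192.168.0.1");

        Hazelcast.newHazelcastInstance(config);
    }
}
```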
In the join process, the oldest member sends the updated member list to all members and tells them to synchronize data in order to balance the data load.
Every member in the cluster has the same member list in the same order. The first member is the oldest, so if the oldest member dies, the second member in the list becomes the first and the new oldest member. The oldest member acts as the JEM cluster coordinator: it executes the actions that must be performed by a single member (e.g. releasing locks after a member crash).
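Because Hazelcast returns the member list in the same order on every node, a member can check by itself whether it is the coordinator. A minimal sketch using the standard Hazelcast cluster API:

```java
import com.hazelcast.core.Cluster;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Member;

public class CoordinatorSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        Cluster cluster = hz.getCluster();
        // getMembers() returns the members in the same order on every node:
        // the first entry is the oldest member.
        Member oldest = cluster.getMembers().iterator().next();
        if (oldest.equals(cluster.getLocalMember())) {
            // Coordinator-only work goes here, e.g. releasing locks
            // left behind by a crashed member.
            System.out.println("this member is the JEM cluster coordinator");
        }
    }
}
```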
Aside from the "normal" nodes, there is another kind of node in the cluster, called a supernode. A supernode is a Hazelcast lite member.
Supernodes are members with no storage: they join the cluster as "lite members" and hold no data partitions, yet get fast access to the cluster just like any regular member does. These nodes are used for the web application (running on Apache Tomcat,[5] or on any other application server).
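A minimal sketch of starting such a member, assuming a Hazelcast version that exposes lite members through `Config.setLiteMember`:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;

public class SupernodeSketch {
    public static void main(String[] args) {
        Config config = new Config();
        // A lite member joins the cluster but owns no data partitions:
        // this is what JEM calls a "supernode".
        config.setLiteMember(true);
        Hazelcast.newHazelcastInstance(config);
    }
}
```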
Here is a table of the various node statuses:
Node type | Status | Description |
---|---|---|
NODE | STARTING | during start-up, the node registers itself with this status |
NODE | INACTIVE | the node is ready to take a JCL to execute |
NODE | ACTIVE | a job is running and the node is managing it |
NODE | DRAINING | an operator has issued a drain command to block any processing, but the node was ACTIVE and a job is still running |
NODE | DRAINED | any processing on this node is blocked by the operator |
NODE | UNKNOWN | the node is no longer joined to the cluster and its status is unknown |
SUPERNODE | ACTIVE | supernodes are always active; it is not possible to drain or start them |
SUPERNODE | UNKNOWN | the node is no longer joined to the cluster and its status is unknown |
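The statuses above could be restated as a simple Java enum; the names mirror the table, but the type itself is illustrative, not necessarily JEM's actual class:

```java
// Illustrative restatement of the node status table as an enum.
public enum NodeStatus {
    STARTING, // node is registering itself during start-up
    INACTIVE, // ready to take a JCL to execute
    ACTIVE,   // a job is running and the node is managing it
    DRAINING, // drain requested while a job is still running
    DRAINED,  // all processing on the node is blocked
    UNKNOWN   // node is no longer joined to the cluster
}
```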
The Execution Environment is a set of logical definitions, related to the cluster, used to address each job to the right member for execution. JEM implements three kinds of coordinates, used as tags: environment, domain, and affinity.
Each node belongs to an environment, a domain, and one or more affinities.
Each JCL can be defined to run on a specific environment, domain, and affinity, as sketched below.
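A minimal sketch of the routing idea: a node's coordinates are compared with the tags requested by the JCL, and the job runs only where all three match. The method name, tag values, and exact matching rule are assumptions for illustration, not JEM's real API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class RoutingSketch {
    // Hypothetical matching rule: a node can execute a JCL only if all three
    // coordinates are satisfied.
    static boolean canRun(String nodeEnv, String nodeDomain, Set<String> nodeAffinities,
                          String jclEnv, String jclDomain, String jclAffinity) {
        return nodeEnv.equals(jclEnv)
                && nodeDomain.equals(jclDomain)
                && nodeAffinities.contains(jclAffinity);
    }

    public static void main(String[] args) {
        Set<String> affinities = new HashSet<>(Arrays.asList("oracle", "linux"));
        System.out.println(canRun("PROD", "batch", affinities,
                                  "PROD", "batch", "oracle")); // true
    }
}
```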
JEM manages several queues used to maintain the life-cycle of a job; the queues are implemented using Hazelcast data sharing.
When a job is moved into the output queue, the submitter receives a "job ended" notification (via a topic), as sketched below.
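A minimal sketch of that notification using Hazelcast's standard topic API; the topic name and payload are illustrative, as JEM's real ones may differ:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class JobEndedSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // Illustrative topic name.
        ITopic<String> topic = hz.getTopic("job-ended");
        // Submitter side: subscribe to the notification.
        topic.addMessageListener(message ->
                System.out.println("job ended: " + message.getMessageObject()));
        // Node side: publish when the job reaches the output queue.
        topic.publish("JOB00123");
    }
}
```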
In addition to in-memory data sharing, one of the most important requirements for JEM is a global file system (GFS[6]). The main goal is to store data on a common file system so that all jobs can read and write them. Nevertheless, a GFS is not mandatory if you prefer to keep data spread across the machines, configuring JEM with separate environments and specific domains and affinities.
In any case, a GFS is recommended for storing the keys, keystores, and licenses that JEM uses for encryption.
The following folders should be configured:
Each of these paths should be mounted on a shared file system (possibly a different shared file system for each path, if needed) so that all the nodes in the cluster refer to files in the same way, avoid redundancy, and always stay up to date with respect to library versions, binary versions, and so on.
In this documentation, when we refer to the JEM GFS (global file system), we are referring to these paths.