Friday, June 08, 2012

OSB - Fault Handling in OSB

Fault Handling in OSB (https://svgonugu.wordpress.com/tag/service-callout/)

OSB - Publish, Routing and Service Callout


1. Service Callout
Used in real-time request-response scenarios. It calls a service synchronously, so the thread is blocked until a response is returned from the target service.


2. Publish
Used for request-only scenarios where you don't expect a response back. Whether a Publish action behaves synchronously or asynchronously depends on the target service you are invoking.

  • When invoking an external service through a business service, a Publish action with Quality of Service (QoS) set to "Best Effort" (the default) works as fire-and-forget and the thread is not blocked (asynchronous call).
  • When invoking a local proxy service (a proxy whose transport protocol is "local") from another proxy via a Publish action, the call is blocking (synchronous) and the thread is held until the local proxy finishes processing.
3. Routing
A Routing action can only be created inside a Route node. The Route node is the last node in request processing and is not part of the pipeline; it hands all further processing over to another service (business or proxy). A Route node indicates that request processing ends there and response processing begins. You cannot place any node after a Route node in the message flow.

In addition to whatever else it does, the Route node can be seen as the point where the request thread stops and the response thread begins. By design, the request and response pipelines of an OSB proxy run in different threads unless configured otherwise.

Good Practice
Use Service Callouts for message enrichment or message validation. Use the Route node to invoke the actual service for which the proxy service is a 'proxy'.

Try to design OSB proxy services around the standard VETO pattern: Validate, Enrich, Transform, rOute. Use Service Callouts for the Validate and Enrich steps and the Route node for the rOute step. This approach makes for a cleaner design.

Reference:
Thread: compare Routing action versus Service Callout action versus Publish action?
Thread: Route node and Service Call out in OSB




OSB - Upgrade ALSB to OSB



1. XPath namespace prefixes
  <soapenv:Body xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="example.com">
   <urn:HouseNumber>100</urn:HouseNumber>
    . . .
ALSB: select the 'HouseNumber' element with the XPath expression
  $body/ns1:HouseNumber

'ns1' is defined to be the correct namespace value "example.com".

OSB: When using very complex XSDs, this is not guaranteed to work in OSB!

Workaround: Ignore the namespace and use the local-name() function in the XPath expression:

  $body/*[local-name()='HouseNumber']


2. XSLT validation in Workspace Studio
XSLT runtime processor:

  • ALSB - Xalan XSLT processor
  • OSB   - Saxon XSLT processor (Imports are no longer supported in Saxon.)



In OEPE the XSLT file validated without any error, so the design-time validation still supports imports, but the error occurred at runtime.


3. JAXWS
A web service client upgraded from JAX-RPC to JAX-WS may not work properly.


References:


Upgrading ALSB services to OSB (http://www.capgemini.com/oracleblog/2011/06/upgrading-alsb-services-to-osb/)

Thursday, June 07, 2012

WebLogic - Work Manager vs. Execute Queue



Execute Queue (releases prior to WLS 9.0)
  • Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks.
  • Each servlet or RMI request was associated with a dispatch policy that mapped to an execute queue. Requests without an explicit dispatch policy used the server-wide default execute queue.
  • Execute queues are always global.
  • The thread count had to be set manually. Customers defined new thread pools and configured their size to avoid deadlocks and provide differentiated service, but it is quite difficult to determine the exact number of threads needed in production to achieve optimal throughput and avoid deadlocks.
Work Managers
  • All Work Managers share a common thread pool and a priority-based queue. The common thread pool changes its size automatically to maximize throughput.
  • Work Managers become very lightweight, and customers can create Work Managers without worrying about the size of the thread pool. 
  • Thread dumps look much cleaner with fewer threads.
  • It is possible to specify different service-level agreements (SLAs), such as fair shares or response-time goals, for the same servlet invocation depending on the user associated with the invocation. Requests are still associated with a dispatch policy but are mapped to a Work Manager instead of to an execute queue (see the descriptor sketch after this list).
  • Work Managers are always application scoped. Even Work Managers defined globally in the console are application scoped at runtime. This means that each application gets its own runtime instance that is distinct from the others, but all of them share the same characteristics, such as fair-share goals.
  • Thread count does not need to be set. WebLogic Server is self-tuned, dynamically adjusting the number of threads to avoid deadlocks and achieve optimal throughput subject to concurrency constraints. It also meets objectives for differentiated service. These objectives are stated as fair shares and response-time goals.
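For illustration, request classes of this kind are declared on a Work Manager in the application's descriptor. Here is a minimal weblogic.xml sketch; the Work Manager names, request-class names and numeric values are invented, and the element names should be checked against the schema of your WebLogic version:

<weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/weblogic-web-app">
    <!-- Fair-share request class: this Work Manager gets a larger share of threads under load -->
    <work-manager>
        <name>HighPriorityWM</name>
        <fair-share-request-class>
            <name>HighPriorityShare</name>
            <fair-share>80</fair-share>
        </fair-share-request-class>
    </work-manager>
    <!-- Response-time request class: the self-tuner works toward a 2000 ms response-time goal -->
    <work-manager>
        <name>FastResponseWM</name>
        <response-time-request-class>
            <name>TwoSecondGoal</name>
            <goal-ms>2000</goal-ms>
        </response-time-request-class>
    </work-manager>
</weblogic-web-app>

Requests dispatched to HighPriorityWM would then receive a larger fair share of threads than requests running under the default Work Manager.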

When to Use Work Managers
The following guidelines help you determine when you might want to use Work Managers to customize thread management:
  • The default fair share is not sufficient. (This usually occurs when one application needs to be given higher priority than another.)
  • A response time goal is required.
  • A minimum thread constraint needs to be specified to avoid server deadlock (see the sketch after this list).
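For the deadlock case specifically, a minimum threads constraint can be attached to a Work Manager. A minimal, hypothetical weblogic.xml fragment (the names and the count are made up):

<work-manager>
    <name>CriticalWM</name>
    <!-- Guarantee at least 5 threads so interdependent requests cannot starve one another -->
    <min-threads-constraint>
        <name>MinFiveThreads</name>
        <count>5</count>
    </min-threads-constraint>
</work-manager>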

Default Work Manager
To handle thread management and perform self-tuning, WebLogic Server implements a default Work Manager. This Work Manager is used by an application when no other Work Managers are specified in the application’s deployment descriptors.

In many situations, the default Work Manager may be sufficient for most application requirements. WebLogic Server’s thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.

You can override the behavior of the default Work Manager by creating and configuring a global Work Manager called default. This allows you to control the default thread-handling behavior of WebLogic Server.

Global Work Managers
You can create Work Managers that are available to all applications and modules deployed on a server. Global Work Managers are created in the WebLogic Administration Console and are defined in config.xml.

An application uses a globally defined Work Manager as a template. Each application creates its own instance, which handles the work associated with that application and separates that work from other applications. This separation is used to handle traffic directed to two applications that use the same dispatch policy. Handling each application's work separately allows one application to be shut down without affecting the thread management of another. Although each application implements its own Work Manager instance, the underlying components are shared.
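Although global Work Managers are normally created in the console, the definition ends up in config.xml. As a rough sketch only (the exact element layout of the self-tuning section is assumed here, and the Work Manager and server names are placeholders), it looks something like:

<self-tuning>
    <work-manager>
        <name>wm/GlobalOrderWM</name>
        <target>ManagedServer1</target>
    </work-manager>
</self-tuning>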

Application-scoped Work Managers
In addition to globally-scoped Work Managers, you can also create Work Managers that are available only to a specific application or module. Work Managers can be specified in the following descriptors:
  • weblogic-application.xml
  • weblogic-ejb-jar.xml
  • weblogic.xml
If you do not explicitly assign a Work Manager to an application, it uses the default Work Manager.

A method is assigned to a Work Manager using the dispatch-policy element in the deployment descriptor. The dispatch-policy can also identify a custom execute queue, for backward compatibility.
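For example, an EJB's methods can be pointed at a Work Manager with dispatch-policy in weblogic-ejb-jar.xml (a web application can do the same through wl-dispatch-policy in weblogic.xml). A sketch, with the bean name, Work Manager name and namespace assumed:

<weblogic-ejb-jar xmlns="http://www.bea.com/ns/weblogic/90">
    <weblogic-enterprise-bean>
        <ejb-name>OrderProcessorBean</ejb-name>
        <!-- Invocations of this bean's methods run under the named Work Manager -->
        <dispatch-policy>HighPriorityWM</dispatch-policy>
    </weblogic-enterprise-bean>
</weblogic-ejb-jar>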

Work Managers and Execute Queues
This section discusses how to enable backward compatibility with Execute Queues and how to migrate applications from using Execute Queues to Work Managers.

Enabling Execute Queues
WebLogic Server 8.1 implemented Execute Queues to handle thread management, which allowed you to create thread pools to determine how workload was handled. WebLogic Server still provides Execute Queues for backward compatibility, primarily to facilitate application migration. However, new application development should use Work Managers to perform thread management more efficiently.

You can enable Execute Queues in the following ways:
  • Using the command line option -Dweblogic.Use81StyleExecuteQueues=true
  • Setting the Use81StyleExecuteQueues property via the Kernel MBean in config.xml.
When enabled, Work Managers are converted to Execute Queues based on the following rules:

  • If the Work Manager implements a minimum or maximum threads constraint, then an Execute Queue is created with the same name as the Work Manager. The thread count of the Execute Queue is based on the value defined in the constraint.
  • If the Work Manager does not implement any constraints, then the global default Execute Queue is used.

Wednesday, June 06, 2012

WebLogic - Session Monitoring Using WebLogic Diagnostics Framework (WLDF)

It is very helpful to know whether the HttpSession attributes are serializable or not. From the Admin Console you can see the size of the HttpSession; if it is a negative value, the HttpSession contains some non-serializable attributes and your application is not ready to be deployed in a clustered environment.

What should I do for WLDF?
1. Login to admin console and then create a diagnostic module
    AdminConsole-->Diagnostics-->Diagnostics Modules

  • Click New to create a new Diagnostic Module with a name of your choice, for example MyDiagModule, and click OK to save it.
  • Click MyDiagModule to open the Diagnostic Module's configuration page. Select the Instrumentation tab, check the Enabled checkbox and save.
  • Click the Targets tab and set the target to cgServer (or whichever server you want).

2. Modify your application and add weblogic-diagnostics.xml to the META-INF of your application EAR file.
3. Deploy your application.
4. Login to the admin console and find your application underneath Deployments.
    AdminConsole-->Deployments-->YourApplication --> Configuration --> Instrumentation

  • Click your application name to open the application's settings page.
  • Select the Configuration tab, click the Instrumentation tab, then check the Enabled checkbox and save.
  • Click the Add Monitor From Library button, select HttpSessionDebug and confirm by clicking OK.

NOTE: After completing the steps above, restart the server and make sure everything works.

Then you can write some code that sets a few session attributes; you will be able to see the payload size in the Events Log in the Admin Console.

AdminConsole--> Diagnostics--> Log Files--> EventsDataArchive

weblogic-diagnostics.xml

<?xml version="1.0" encoding="UTF-8"?>
<wldf-resource xmlns="http://www.bea.com/ns/weblogic/90/diagnostics">
    <instrumentation>
        <enabled>true</enabled>
        <wldf-instrumentation-monitor>
            <name>HttpSessionDebug</name>
            <enabled>true</enabled>
        </wldf-instrumentation-monitor>
    </instrumentation>
</wldf-resource>




References for WLDF:
Instrumenting Weblogic Applications with WLDF (http://oraclemiddleware.wordpress.com/2012/02/01/instrumenting-weblogic-applications-with-wldf-where-does-the-application-spend-time/)
Instrumenting Java EE applications in WebLogic Server (https://blogs.oracle.com/WebLogicServer/entry/instrumenting_java_ee_applicat)
Session Monitoring Using WLDF (http://middlewaremagic.com/weblogic/?p=450)
Performing Diagnostics in a WebLogic environment (http://middlewaremagic.com/weblogic/?p=6016)



WebLogic - Database Connection Pinned-to-Thread



The following info comes from the Oracle documentation.

Using Pinned-To-Thread Property to Increase Performance

To minimize the time it takes for an application to reserve a database connection from a data source and to eliminate contention between threads for a database connection, you can add the Pinned-To-Thread property in the connection Properties list for the data source, and set its value to true.
When Pinned-To-Thread is enabled, WebLogic Server pins a database connection from the data source to an execution thread the first time an application uses the thread to reserve a connection. When the application finishes using the connection and calls connection.close(), which otherwise returns the connection to the data source, WebLogic Server keeps the connection with the execute thread and does not return it to the data source. When an application subsequently requests a connection using the same execute thread, WebLogic Server provides the connection already reserved by the thread. There is no locking contention on the data source that occurs when multiple threads attempt to reserve a connection at the same time and there is no contention for threads that attempt to reserve the same connection from a limited number of database connections.

Note:
In this release, the Pinned-To-Thread feature does not work with multi data sources, Oracle RAC, and IdentityPool. These features rely on the ability to return a connection to the connection pool and reacquire it if there is a connection failure or connection identity does not match.

Changes to Connection Pool Administration Operations When PinnedToThread is Enabled

Because the nature of connection pooling behavior is changed when PinnedToThread is enabled, some connection pool attributes or features behave differently or are disabled to suit the behavior change:
  • Maximum Capacity is ignored. The number of connections in a connection pool equals the greater of either the initial capacity or the number of connections reserved from the connection pool.
  • Shrinking does not apply to connection pools with PinnedToThread enabled because connections are never returned to the connection pool. Effectively, they are always reserved.
  • When you Reset a connection pool, the reset connections from the connection pool are marked as Test Needed. The next time each connection is reserved, WebLogic Server tests the connection and recreates it if necessary. Connections are not tested synchronously when you reset the connection pool. This feature requires that Test Connections on Reserve is enabled and a Test Table Name or query is specified.

Additional Database Resource Costs When PinnedToThread is Enabled

When PinnedToThread is enabled, the maximum capacity of the connection pool (maximum number of database connections created in the connection pool) becomes the number of execute threads used to request a connection multiplied by the number of concurrent connections each thread reserves. This may exceed the Maximum Capacity specified for the connection pool. You may need to consider this larger number of connections in your system design and ensure that your database allows for additional associated resources, such as open cursors.
Also note that connections are never returned to the connection pool, which means that the connection pool can never shrink to reduce the number of connections and associated resources in use. You can minimize this cost by setting an additional driver parameter, onePinnedConnectionOnly. When onePinnedConnectionOnly=true, only the first connection requested is pinned to the thread. Any additional connections required by the thread are taken from and returned to the connection pool as needed. Set onePinnedConnectionOnly using the Properties attribute, for example:
Properties="PinnedToThread=true;onePinnedConnectionOnly=true;user=examples"
If your system can handle the additional resource requirements, Oracle recommends that you use the PinnedToThread option to increase performance.
If your system cannot handle the additional resource requirements or if you see database resource errors after enabling PinnedToThread, Oracle recommends not using PinnedToThread.
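For reference, when the data source is packaged as a JDBC module, the same driver properties can be set in the module descriptor. A minimal sketch; the data source name, URL, driver class and namespace are placeholders/assumptions:

<jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/jdbc-data-source">
    <name>ExamplesDS</name>
    <jdbc-driver-params>
        <url>jdbc:oracle:thin:@dbhost:1521:ORCL</url>
        <driver-name>oracle.jdbc.OracleDriver</driver-name>
        <properties>
            <property><name>user</name><value>examples</value></property>
            <!-- Pin the first connection reserved by each execute thread -->
            <property><name>PinnedToThread</name><value>true</value></property>
            <!-- Additional connections on the same thread still go back to the pool -->
            <property><name>onePinnedConnectionOnly</name><value>true</value></property>
        </properties>
    </jdbc-driver-params>
</jdbc-data-source>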



Somebody wrote:
PinnedToThread would definitely improve performance. Its main purpose is to work around threading limitations when you use JDBC Type 2 XA drivers.