Thursday, February 6, 2014

deadlock in file rotation

Configuration: WLS 10.3.6 + Coherence + OSB

Situation: all logs are being written to a single file.


Below is the chain of events from the thread dump:

ExecuteThread: '9'
Blocked trying to get lock: weblogic/work/CalendarQueue@0x2913e168[thin lock]   <--1
     ^-- Holding lock: weblogic/logging/LoggingPrintStream@0x2a2a1f68[thin lock]   <--4
         at sun/nio/cs/StreamEncoder.implFlush (inlined)

"weblogic.cluster.MessageReceiver"   id=94 idx=0x158 tid=14612 prio=5 alive, blocked, native_blocked, daemon   
Blocked trying to get lock: weblogic/logging/FileStreamHandler@0x29bddfd8[thin lock]                                           <--2
Holding lock: weblogic/work/CalendarQueue@0x2913e168[recursive]     <--1
Holding lock: weblogic/work/CalendarQueue@0x2913e168[thin lock]

"Timer-2"   id=25 idx=0x58 tid=12819 prio=5 alive, blocked, native_blocked, daemon   
Blocked trying to get lock: com/bea/logging/StdoutHandler@0x23dc37f0[thin lock]  <--3
Holding lock: weblogic/logging/FileStreamHandler@0x29bddfd8[recursive]     <--2
at com/bea/logging/RotatingFileOutputStream$   
Holding lock: weblogic/logging/FileStreamHandler@0x29bddfd8[thin lock]

"Timer-5"   id=40 idx=0x88 tid=13405 prio=5 alive, blocked, native_blocked, daemon   
Blocked trying to get lock: weblogic/logging/LoggingPrintStream@0x2a2a1f68[thin lock] <--4
Holding lock: com/bea/logging/StdoutHandler@0x23dc37f0[recursive]     <--3
at com/bea/logging/StdoutHandler.publish(   
Holding lock: com/bea/logging/StdoutHandler@0x23dc37f0[thin lock]

Circular (deadlocked) lock chains
Chain 2:
"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=26 idx=0x5c tid=3592 waiting for weblogic/logging/FileStreamHandler@0xaa22cbb8 held by:
"Timer-2" id=24 idx=0x54 tid=3589 waiting for weblogic/work/CalendarQueue@0xa88491c0 held by:
"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=26 idx=0x5c tid=3592 
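The cycle above (LoggingPrintStream -> CalendarQueue -> FileStreamHandler -> StdoutHandler -> back to LoggingPrintStream) is a classic lock-ordering deadlock. A generic Python sketch (not WebLogic code; the lock names are stand-ins) of how a single global lock order prevents the circular wait:

```python
import threading

# Stand-ins for the two contended monitors in the dump
log_lock = threading.Lock()      # "LoggingPrintStream"
queue_lock = threading.Lock()    # "CalendarQueue"

results = []

def worker(n):
    # Every thread acquires the locks in the same global order
    # (log_lock first, then queue_lock), so no circular wait can form.
    with log_lock:
        with queue_lock:
            results.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # prints 4
```

If even one thread took queue_lock before log_lock, the same kind of cycle as in the dump could form under load.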

Fix: disable the two settings below in the console, under the server's logging configuration:
Redirect stdout logging enabled
Redirect stderr logging enabled
and redirect the Coherence logs to a separate file.

In a nutshell: write each log stream to its own file.

Alternatively, apply patch 17070169 for 10.3.6.
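The same console change can be scripted with WLST; a sketch assuming placeholder server/credential names (run against your own admin URL and server name):

```python
# WLST sketch (Jython); names, URL and password are placeholders
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit(); startEdit()
cd('/Servers/osb_server1/Log/osb_server1')
cmo.setRedirectStdoutToServerLogEnabled(false)   # "Redirect stdout logging enabled"
cmo.setRedirectStderrToServerLogEnabled(false)   # "Redirect stderr logging enabled"
save(); activate()
```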

Tuesday, January 28, 2014

Provide a user with monitor-role access to view JMS messages (via console & WLST)

1. Enable the JMX policy editor
Log in to console - Security Realms - myrealm - Configuration - General - enable "Use Authorization Providers to Protect JMX Access" - Save - Activate Changes - restart the server.
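This console option corresponds to the realm's DelegateMBeanAuthorization attribute, so step 1 can also be done in WLST; a sketch with placeholder domain/credentials (a restart is still required):

```python
# WLST sketch (Jython); domain name, URL and password are placeholders
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit(); startEdit()
cd('/SecurityConfiguration/mydomain/Realms/myrealm')
cmo.setDelegateMBeanAuthorization(true)   # "Use Authorization Providers to Protect JMX Access"
save(); activate()
```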

2. Create a user with the Monitor role
Log in to console - Security Realms - myrealm - Users & Groups - Users - New - create the new user - Save - click on that user again - Groups - select Monitors in the left table and move it to the right - Save.
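Step 2 can also be scripted against the default authenticator; a WLST sketch where the user name, password and realm/domain names are placeholders:

```python
# WLST sketch (Jython); all names and credentials are placeholders
connect('weblogic', 'welcome1', 't3://localhost:7001')
cd('SecurityConfiguration/mydomain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator')
cmo.createUser('jmsmonitor', 'Monitor123!', 'read-only JMS user')  # the user from step 2
cmo.addMemberToGroup('Monitors', 'jmsmonitor')                     # grants the Monitor role
disconnect()
```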

3. Create the policy
Log in to console - Security Realms - myrealm - Roles and Policies - Realm Policies - JMX Policy Editor - Global Scope - Next - JMSDestinationRuntimeMBean - Next - Operations: Permission to Invoke - Create Policy - Add Conditions - Predicate List: User - Next - type your user name and click Add - Finish - Save.

4. Now log in to the console as that user (with the Monitor role) and try reading a message.


For a granular approach that grants permission to getMessages only:

cmo.createPolicy('type=<jmx>, operation=invoke, application=, target=getMessages','{Rol(Monitor)}')

For broader permissions, remove the target:

cmo.createPolicy('type=<jmx>, operation=invoke, application=','{Rol(Monitor)}')
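For reference, the createPolicy calls above run against the realm's authorizer MBean; a WLST sketch assuming the default XACMLAuthorizer (the provider name differs if you use a custom authorizer):

```python
# WLST sketch (Jython); URL, credentials and domain/realm names are placeholders
connect('weblogic', 'welcome1', 't3://localhost:7001')
cd('SecurityConfiguration/mydomain/Realms/myrealm/Authorizers/XACMLAuthorizer')
# allow holders of the Monitor role to invoke getMessages on JMX resources
cmo.createPolicy('type=<jmx>, operation=invoke, application=, target=getMessages', '{Rol(Monitor)}')
disconnect()
```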

I was able to get the resource details by enabling audit logging

Wednesday, January 15, 2014

java.lang.OutOfMemoryError: GC overhead limit exceeded

Issue: java.lang.OutOfMemoryError: GC overhead limit exceeded in the JVM logs.
Details: Sun JDK 1.6, parallel collector GC.
Solution: add -XX:-UseGCOverheadLimit to JAVA_OPTIONS (note the minus sign: the check is on by default, and this flag disables it).
The GC overhead limit is a policy that limits the proportion of total time the VM spends in GC before an OutOfMemoryError is thrown.
However, disabling this check will not avoid a genuine OutOfMemoryError at a later stage; it only suppresses the early failure.
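A minimal sketch of where the flag would go, assuming a standard domain layout (the file name and location vary by installation):

```shell
# In $DOMAIN_HOME/bin/setDomainEnv.sh (assumed location -- adjust for your
# domain and start scripts): append the flag to the existing JAVA_OPTIONS.
JAVA_OPTIONS="${JAVA_OPTIONS} -XX:-UseGCOverheadLimit"
export JAVA_OPTIONS
```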