00:08.37 | *** join/#tomcat Electron (~Electron@CPE20aa4b1664bd-CM185933fe73ca.cpe.net.cable.rogers.com) |
01:03.39 | *** join/#tomcat Electron (~Electron@CPE20aa4b1664bd-CM185933fe73ca.cpe.net.cable.rogers.com) |
01:31.36 | *** join/#tomcat Electron (~Electron@CPE20aa4b1664bd-CM185933fe73ca.cpe.net.cable.rogers.com) |
01:49.08 | *** join/#tomcat Electron (~Electron@CPE20aa4b1664bd-CM185933fe73ca.cpe.net.cable.rogers.com) |
02:28.49 | *** join/#tomcat BrianJ (~textual@pool-108-18-120-114.washdc.fios.verizon.net) |
04:45.07 | *** join/#tomcat Prezioso (~peter@85.218.165.129) |
05:09.45 | *** join/#tomcat zacce (~zacce@dsl-trebrasgw2-fe90de00-215.dhcp.inet.fi) |
06:05.07 | *** join/#tomcat factor (~factor@r74-195-218-112.msk1cmtc02.mskgok.ok.dh.suddenlink.net) |
06:12.27 | *** join/#tomcat lkoranda (lkoranda@nat/redhat/x-hvoqqyxudfvtdqbj) |
06:25.21 | *** join/#tomcat papegaaij (~papegaaij@5ee53fc2.ftth.concepts.nl) |
06:26.57 | *** join/#tomcat mturk (~mturk@41-193.dsl.iskon.hr) |
06:26.58 | *** join/#tomcat mturk (~mturk@redhat/jboss/mturk) |
06:55.51 | *** join/#tomcat opalka (~ropalka@84.64.broadband3.iol.cz) |
06:55.51 | *** join/#tomcat opalka (~ropalka@redhat/jboss/opalka) |
07:04.11 | *** join/#tomcat Mimiko (~Mimiko@77.89.245.38) |
07:05.35 | *** join/#tomcat internat (~nf@60-241-102-25.static.tpgi.com.au) |
07:41.21 | *** join/#tomcat Prezioso (~peter@85.218.165.129) |
07:56.56 | *** join/#tomcat _moon (~moon@LNeuilly-152-22-7-151.w193-251.abo.wanadoo.fr) |
08:54.07 | *** join/#tomcat zerobravo (~zerobravo@93-136-104-32.adsl.net.t-com.hr) |
10:38.23 | *** join/#tomcat _moon (~moon@LNeuilly-152-22-7-151.w193-251.abo.wanadoo.fr) |
10:44.38 | *** join/#tomcat Gletster (~Thunderbi@static-96-252-180-210.tampfl.fios.verizon.net) |
10:48.23 | *** join/#tomcat Electron (~Electron@CPE20aa4b1664bd-CM185933fe73ca.cpe.net.cable.rogers.com) |
10:51.28 | *** join/#tomcat Gletster (~Thunderbi@static-96-252-180-210.tampfl.fios.verizon.net) |
11:14.21 | *** join/#tomcat yassine (~yassine@unaffiliated/yassine) |
12:39.29 | *** join/#tomcat noscript (~textual@p5B321815.dip.t-dialin.net) |
12:40.10 | *** join/#tomcat acidjnk22 (~havenone@pD9F86D27.dip.t-dialin.net) |
13:26.33 | *** join/#tomcat Goeland86 (~john@46.140.64.42) |
13:26.35 | *** part/#tomcat Goeland86 (~john@46.140.64.42) |
13:43.36 | *** join/#tomcat medthomas (~markt@minotaur.apache.org) |
13:43.37 | *** join/#tomcat medthomas (~markt@apache/committer/markt) |
13:49.42 | *** join/#tomcat gonglin_ (~gonglin@118.186.58.35) |
14:06.40 | *** join/#tomcat aniasis (~aniasis@64.124.202.222) |
14:25.42 | *** join/#tomcat Falados (~falados@207.86.141.138) |
15:56.25 | *** join/#tomcat caveat- (hoax@gateway/shell/bshellz.net/x-cmkayocbrlbjvxdh) |
16:11.25 | *** join/#tomcat sag47 (~derp@farcry.irt.drexel.edu) |
16:13.32 | sag47 | I have a war with a persistent connection pool to an oracle database. When I hot deploy will it release and reestablish the connection pool or will bad things happen? Basically what I'm asking is: Does using a connection pool make it so that I can't hot deploy and need to restart the tomcat server? |
16:24.38 | *** join/#tomcat cjz (~Adium@173-13-190-57-sfba.hfc.comcastbusiness.net) |
16:27.29 | whartung | no, that shouldn't be a problem sag47. Are you setting the pool up in tomcat or in your application? |
16:34.10 | *** join/#tomcat ianbrandt (~ianbrandt@99-111-99-153.uvs.sndgca.sbcglobal.net) |
16:34.35 | *** join/#tomcat ianbrandt (~ianbrandt@99-111-99-153.uvs.sndgca.sbcglobal.net) |
16:43.13 | sag47 | Hi whartung, the pool is set up within the war file only. No outside configs or libs are added during the hot deployment. |
16:45.19 | *** join/#tomcat dknox (~dknox@66.109.209.21) |
16:53.47 | whartung | then the pool will restart with the war I would think, right sag47 ? |
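[Editor's note: for contrast with the in-war pool sag47 describes, the container-managed alternative whartung's question alludes to would typically be declared as a JNDI resource in the war's META-INF/context.xml, so Tomcat owns the pool's lifecycle across redeploys. A sketch for Tomcat 6 (which uses Commons DBCP, hence maxActive/maxIdle); the resource name, host, and credentials are hypothetical:]

```xml
<!-- META-INF/context.xml inside the war; names and credentials are placeholders -->
<Context>
  <Resource name="jdbc/AppDB"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@dbhost:1521:ORCL"
            username="appuser"
            password="secret"
            maxActive="10"
            maxIdle="5"/>
</Context>
```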
17:10.34 | *** join/#tomcat jasonb_ (d871a88d@gateway/web/freenode/ip.216.113.168.141) |
18:09.49 | sag47 | whartung: That's what I would think but I wanted to ask to be sure; being a persistent connection and all. |
18:10.06 | whartung | yea that's the trick |
18:10.41 | whartung | because if you are managing the connection pool yourself (i.e. your code is spinning it up etc) then it's up to your app to properly shut the connection pool down |
18:10.56 | whartung | if you're not doing that, then the pool is not shutting down cleanly. |
18:11.37 | whartung | now, whatever sockets and such it's leaving open are getting shut down inevitably, likely due to finalization during GC, you just have no control over that, and those are abrupt shutdowns. |
18:13.51 | whartung | so if you aren't specifically shutting down the connection pool, I'd add that to your code. |
18:13.56 | whartung | just for basic hygiene |
18:34.09 | sag47 | whartung: okay, thanks. How can I determine in the code that the jar is being undeployed? Whether I'm hot deploying or completely removing the jar? |
18:34.22 | sag47 | (remove that last question mark and make it a period) |
18:34.44 | whartung | how are you hot deploying, you mean redeploying the app without restarting tomcat? |
18:36.33 | sag47 | Yes, whether I'm simply updating the timestamp with touch (running apache-tomcat-6.0.24 on RHEL 5.5) or even overwriting with a new war version. Both instances hot deploy. |
18:38.43 | whartung | ok |
18:39.22 | whartung | when you do that, the container is going to first undeploy the running instance, right? then it'll start your new one from the new artifacts. |
18:39.50 | whartung | There are two points in a WAR where you can interact with the lifecycle of the application |
18:39.58 | whartung | the best one is a ServletContextListener |
18:40.11 | whartung | this gets called when the web app is being started and when it's about to be shut down |
18:40.45 | whartung | the shutdown event is where you want to tell your connection pool that it's going to be closed out. |
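[Editor's note: the hook whartung describes is `javax.servlet.ServletContextListener`, whose `contextDestroyed()` method fires when Tomcat undeploys the war. The sketch below shows the shutdown hygiene he recommends; the `Pool` class is a hypothetical stand-in for the real Oracle pool, and the listener's servlet-API plumbing is reduced to plain methods so the sketch is self-contained and runnable:]

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the app's Oracle connection pool.
class Pool {
    final List<String> connections = new ArrayList<>();
    private boolean closed = false;

    Pool(int size) {
        for (int i = 0; i < size; i++) connections.add("conn-" + i);
    }

    // Close every connection explicitly instead of leaving them
    // to be reaped abruptly by finalization during GC.
    void shutdown() {
        connections.clear();
        closed = true;
    }

    boolean isClosed() { return closed; }
    int size() { return connections.size(); }
}

public class AppLifecycleListener {
    Pool pool;

    // Mirrors ServletContextListener.contextInitialized(ServletContextEvent):
    // the webapp is starting, so spin the pool up.
    public void contextInitialized() {
        pool = new Pool(10);
    }

    // Mirrors ServletContextListener.contextDestroyed(ServletContextEvent):
    // Tomcat is undeploying the war, so release the DB connections
    // before the webapp's classloader goes away.
    public void contextDestroyed() {
        pool.shutdown();
    }

    public static void main(String[] args) {
        AppLifecycleListener l = new AppLifecycleListener();
        l.contextInitialized();
        System.out.println("open=" + l.pool.size());      // open=10
        l.contextDestroyed();
        System.out.println("closed=" + l.pool.isClosed()); // closed=true
    }
}
```

[In the real webapp the class would implement `javax.servlet.ServletContextListener` and be registered with a `<listener>` element in web.xml, so Tomcat invokes both callbacks automatically on deploy and undeploy.]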
18:45.33 | sag47 | Okay, thanks. I'll talk with my java dev about it (I'm just a scripting sysadmin, but I know a little java programming). |
18:51.05 | sag47 | whartung: as far as ungraceful connection pool shutdown. When the app is redeployed will it be able to open up another connection pool (without affecting application behavior)? |
18:51.53 | sag47 | These questions may seem odd but I'm currently troubleshooting an existing in-house application, which is why I'm asking. Also, I'm curious :) |
18:52.14 | *** join/#tomcat lkoranda (~lkoranda@ip-78-102-114-196.net.upcbroadband.cz) |
18:52.35 | whartung | well here's what's going to happen, right? |
18:52.38 | whartung | for example |
18:52.43 | whartung | say you have 10 connections to oracle |
18:52.48 | whartung | and redeploy the app |
18:53.02 | whartung | with the abrupt shutdown, those original 10 connections may not be closed immediately |
18:53.34 | whartung | so when the new pool starts up, if there's, say, some limit to oracle, it might not be able to get all of the connections it wants (since the other ones are now orphaned, but not quite closed) |
18:53.38 | whartung | so that's a possibility |
18:55.31 | sag47 | Okay, that's what I figured. (i.e. upon redeploy there would then be 20 conns to the DB but the current app is only using the 10 designated from its own pool). |
19:57.37 | *** join/#tomcat Electron (~Electron@CPE20aa4b1664bd-CM185933fe73ca.cpe.net.cable.rogers.com) |
20:00.01 | *** join/#tomcat yassine (~yassine@unaffiliated/yassine) |
22:17.29 | *** join/#tomcat karega (~aniasis@64.69.4.11) |
22:24.25 | *** join/#tomcat factor (~factor@r74-195-218-112.msk1cmtc02.mskgok.ok.dh.suddenlink.net) |
23:02.08 | *** join/#tomcat acidjnk (~havenone@pD9F86D27.dip.t-dialin.net) |
23:11.32 | *** join/#tomcat JiYu (~jiyu@v3-1270.vlinux.de) |
23:18.04 | *** join/#tomcat Electron (~Electron@CPE20aa4b1664bd-CM185933fe73ca.cpe.net.cable.rogers.com) |
23:49.07 | *** join/#tomcat Electron (~Electron@CPE20aa4b1664bd-CM185933fe73ca.cpe.net.cable.rogers.com) |