19:45.21 | *** join/#tomcat ibot (~ibot@rikers.org) |
19:45.21 | *** topic/#tomcat is Stable versions: 7.0.29, 6.0.35, and (EOL'd) 5.5.35. Newbies use the official binary from tomcat.apache.org, an RPM from http://code.google.com/p/webdroid-tomcat-package , or the latest tomcat deb. Check logs before asking a question. SLOW MOTION CHANNEL: Ask your question including your TC, Java, & OS versions, then wait; check back for answers. |
19:45.24 | *** join/#tomcat jamespage (~jamespage@2001:41c8:1:57c2::10) |
19:45.24 | *** join/#tomcat jamespage (~jamespage@ubuntu/member/jamespage) |
20:02.05 | *** join/#tomcat yassine (~yassine@unaffiliated/yassine) |
20:02.29 | whartung | wild_oscar: was out to lunch, sorry. |
20:02.42 | whartung | There's a temp directory you can get from the Servlet Context i think. |
20:02.52 | whartung | that's the only guaranteed file path in the servlet spec. |
20:20.30 | jasonb | wild_oscar: No, not true. Tomcat primarily supports deploying unpacked webapp directories. Supporting packed WAR files also works because of the large amount of developer time put into supporting that as well. But, nothing prevents you from doing hot (re)deployment and autodeployment (no hand-configuration) in the form of unpacked webapp directories. |
20:35.52 | wild_oscar | jasonb: just read what you said. what is not true? |
20:37.03 | wild_oscar | btw, I posted the questions to the ML. had a couple of answers suggesting using the temp dir; not about why the change from 6 to 7 regarding the exploding of wars outside appBase |
20:39.03 | whartung | wild_oscar: his point is that you can deploy a pre-exploded app and make it as "hot hot hot" as you want. |
20:40.15 | wild_oscar | yes, but we didn't say otherwise |
20:41.13 | whartung | no, but he was simply mentioning it to suggest that auto deployment could still work for you. |
20:41.30 | whartung | but I, like you, am interested why auto-explosion isn't…um…auto. |
20:44.05 | jasonb | wild_oscar: I was saying that your quote wasn't true: "if you want auto deployment you can't have your war exploded" .. but just now while re-reading your exact wording, I realized that what you said is ambiguous.. and that I'm probably disputing the other meaning of your words. :) |
20:44.39 | wild_oscar | jasonb: does the mailing list post explain it better?
20:45.01 | jasonb | I haven't seen that, and I can't look at the moment. |
20:45.13 | jasonb | Hopefully, our book explains it pretty well. :) |
20:45.33 | whartung | what book? |
20:45.44 | jasonb | Tomcat: The Definitive Guide (O'Reilly)
20:46.17 | whartung | wild_oscar: do you have a nabble (or similar) link to your email? |
20:46.31 | wild_oscar | 2 secs |
20:46.53 | wild_oscar | http://tomcat.10.n6.nabble.com/Tomcat-7-Why-can-t-you-use-automatic-deployment-and-exploded-WAR-when-docBase-is-outside-appBase-td4984890.html |
20:47.51 | wild_oscar | whartung, jasonb ^^ |
20:48.02 | whartung | I always enjoy folks criticizing your design rather than answering the question...
20:49.22 | whartung | as for the difference between java.io.tmpdir and the servlet tempdir, arguably, the servlet one is the one that the container has some control and guarantee over in terms of granting it to you (i.e. it "knows" you can write to it, perhaps). |
20:54.07 | jasonb | Each webapp may want to write to the same file path relative to the root of the tmpdir, so if you gave 2 or more webapps /tmp as their temp dir, they may read/write each other's files and potentially misbehave. So, the servlet spec has a different design where each webapp gets its own temp dir to solve that (and potentially other) problems.
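jasonb's isolation argument can be sketched with plain `java.nio` (the real container exposes each webapp's private directory as the `ServletContext` attribute named `javax.servlet.context.tempdir`; the directory and file names below are made up):

```java
import java.nio.file.*;

public class TempDirIsolation {
    public static void main(String[] args) throws Exception {
        Path shared = Files.createTempDirectory("sharedTmp");

        // One shared temp dir: both webapps resolve the same relative path,
        // so the second write clobbers the first.
        Files.write(shared.resolve("cache.dat"), "app1".getBytes("UTF-8"));
        Files.write(shared.resolve("cache.dat"), "app2".getBytes("UTF-8"));
        System.out.println(new String(Files.readAllBytes(shared.resolve("cache.dat")), "UTF-8")); // prints app2

        // Per-webapp private dirs (the servlet-spec design): the same
        // relative path now maps to two distinct files.
        Path app1 = Files.createDirectories(shared.resolve("app1"));
        Path app2 = Files.createDirectories(shared.resolve("app2"));
        Files.write(app1.resolve("cache.dat"), "app1".getBytes("UTF-8"));
        Files.write(app2.resolve("cache.dat"), "app2".getBytes("UTF-8"));
        System.out.println(new String(Files.readAllBytes(app1.resolve("cache.dat")), "UTF-8")); // prints app1
    }
}
```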
20:54.39 | whartung | yea, that sounds even better than mine. |
20:54.53 | wild_oscar | aha! yeah, that I understand really well :D |
20:55.00 | wild_oscar | and it makes sense |
20:55.26 | jasonb | I'm not sure whether there's a real deployment behavior difference between Tomcat 6 and Tomcat 7 w.r.t. keeping the webapps outside of appBase.. or whether you're perceiving a difference because the doc pages are different. |
20:55.30 | wild_oscar | I assume tomcat also sets that (as the last poster writes, "Tomcat (catalina.sh) sets java.io.tmpdir to $CATALINA_BASE/temp for your convenience by default.")
20:55.30 | jasonb | I actually haven't tested that. |
20:56.26 | whartung | yea, wild_oscar, perhaps the doc change is a clarification of past behavior. |
20:56.38 | whartung | so it's a documentation change rather than a functionality change |
20:56.41 | jasonb | Yeah, I think Tomcat's startup scripts set java.io.tmpdir, and if not then something in the first classes that run sets it.. but I'm pretty sure the startup scripts do. |
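The startup-script logic jasonb refers to looks roughly like the following (paraphrased from catalina.sh; exact wording varies across Tomcat versions, and `/opt/tomcat` here is a hypothetical install path):

```shell
# CATALINA_BASE is a hypothetical install location for this sketch.
CATALINA_BASE="${CATALINA_BASE:-/opt/tomcat}"
# catalina.sh defaults CATALINA_TMPDIR to $CATALINA_BASE/temp when unset...
if [ -z "$CATALINA_TMPDIR" ]; then
  CATALINA_TMPDIR="$CATALINA_BASE"/temp
fi
# ...and hands it to the JVM as java.io.tmpdir (done on the java command
# line in the real script; collected into JAVA_OPTS here for brevity).
JAVA_OPTS="$JAVA_OPTS -Djava.io.tmpdir=$CATALINA_TMPDIR"
echo "$CATALINA_TMPDIR"
```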
20:56.46 | wild_oscar | jasonb: I haven't created a testcase. only thing is really the fact that I'm deploying to two different servers and they're behaviour is different |
20:57.01 | wild_oscar | eek! "their" |
20:57.06 | jasonb | okay. Then there's likely to be a real behaviour difference. |
20:58.16 | jasonb | Also, I'd recommend you actually stop using .war files. If you used only unpacked webapp directories instead, you wouldn't have this issue. |
20:58.26 | wild_oscar | with the caveat that they're both distro re-packages - two different versions of ubuntu. but still... |
20:58.47 | jasonb | Yeah, that's why I said "likely". |
20:59.22 | whartung | funny, all I use it war files... |
20:59.27 | whartung | I find them tidy |
20:59.36 | jasonb | .. and troublesome. :) |
20:59.36 | whartung | *is war files... |
20:59.51 | whartung | nah, not so much -- never really had an issue with them. |
20:59.59 | jasonb | You do right now. |
21:00.24 | jasonb | And, every time you pack one and unpack one, you're waiting for a bunch of zip and unzip work to be done that was all unnecessary. |
21:01.19 | jasonb | You're also waiting while the whole zip file transfers to your server before you can unzip it and use it.. and the waiting is much longer than it should be to transmit just the changes of a new copy of the webapp. |
21:01.58 | wild_oscar | jasonb: they're a lot more practical to share, though. re-deploys of the app are a simple scp of the war file, which was created in the CI server |
21:02.01 | jasonb | And, there are cases where you want to just modify one file, or a couple of files on the server, and every time you unpack the zip again it blows away your changes.. that was also unnecessary. |
21:02.54 | wild_oscar | how do you handle that? in the case you want to transmit the app over the web to some server? |
21:03.45 | jasonb | The CI should be doing continuous deployment so that it isn't manual work for you to do.. and if the CI is doing it, then the CI can invoke rsync to deploy just the diffs each time, causing far less transfer time, and that would result in it being deployed much faster to the server.. all without you needing to do anything. |
21:04.09 | wild_oscar | that's the main advantage I see. and also the fact that I have *a file*, so it's easy to version the application (in case you need to have close control over that, and say "*this* is version X") |
21:04.36 | jasonb | Directory names may also have version information in them. |
21:05.19 | jasonb | You can probably tell I've spent a little time thinking about some of this stuff. :) |
21:05.50 | wild_oscar | jasonb: I'll take that into consideration for the future. would it behave just the same in terms of context file? ie, would you point docBase to a directory (rather than a war)? |
21:06.38 | jasonb | Yes, point the docBase to your webapp dir, and it's as simple as that. No unpacking ever happens, it just deploys immediately. |
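Concretely, a per-app context file such as `conf/Catalina/localhost/myapp.xml` (app name and path hypothetical) can point straight at the unpacked directory:

```xml
<!-- docBase names the unpacked webapp directory; no WAR, so no unpack step -->
<Context docBase="/srv/deployments/myapp" />
```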
21:06.57 | wild_oscar | I can see that! :) would you create an md5 of the directory as well - to ensure nothing has changed? I work in the pharma industry, they're a bit picky about that sort of stuff
21:09.10 | whartung | I don't really want to debate the relevance of wars. *I* have never had much issue with them (wild_oscar is certainly having some presently), and I find sending a single large compressed file vs 1000s of small files to be more efficient -- in my experience. For fast turnarounds with lots of JSP work, I have rsync-ish scripts to sync up with my local server.
21:09.44 | whartung | Majority of my work is back end and inevitably requires a redeploy anyway. |
21:27.46 | jasonb | wild_oscar: Some checksum software is able to generate a checksum for a directory's contents. If you don't have something that will do that, you can always create a checksum file inside the directory, and compare the modification timestamps, and/or put the checksum in the dir filename itself. |
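One way to checksum a directory's contents, as jasonb suggests, is to digest every file's relative path and bytes in a deterministic order (a sketch in plain JDK; the class and method names are made up):

```java
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DirChecksum {
    // Digest every regular file's relative path and contents in sorted
    // order, so identical trees always produce the identical checksum.
    static String checksum(Path root) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (Stream<Path> walk = Files.walk(root)) {
            List<Path> files = walk.filter(Files::isRegularFile)
                                   .sorted()
                                   .collect(Collectors.toList());
            for (Path p : files) {
                md.update(root.relativize(p).toString().getBytes("UTF-8"));
                md.update(Files.readAllBytes(p));
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("webapp");
        Files.write(dir.resolve("index.jsp"), "hello".getBytes("UTF-8"));
        System.out.println(checksum(dir)); // 32-hex-char digest of the tree
    }
}
```

Per jasonb's note, the resulting digest could also be stored in a file inside the directory or embedded in the directory name itself.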
21:58.09 | *** join/#tomcat oconnore (~eric@38.111.17.138) |
21:58.20 | oconnore | Hi, I'm using Tomcat7's connection pool to connect to Postgres. I end up having 95 open connections, all but 3 of which are idle. I then get the error "FATAL: remaining connection slots are reserved for non-replication superuser connections". Any ideas? |
21:59.40 | oconnore | (org.apache.tomcat.jdbc.pool) |
22:08.11 | *** part/#tomcat wild_oscar (~malmeida@bl11-126-227.dsl.telepac.pt) |
22:18.58 | whartung | how big is your pool? |
22:19.12 | oconnore | whartung: 100, with 5 reserved |
22:19.43 | whartung | that's why. it's opening them all up. and it sounds like your server needs to be tweaked to accept more connections. |
22:20.42 | oconnore | whartung: Yes, that's the problem. It's opening too many, and not closing/recovering idle connections. |
22:21.11 | whartung | I can't really speak to that, but one of the tenets of a connection pool is to hold open connections. |
22:21.48 | oconnore | whartung: yes, but if there is an idle connection available, why is the pool not returning it? |
22:22.10 | oconnore | it's holding on to 92 idle connections and not letting anyone have them. |
22:22.31 | oconnore | that doesn't seem like a legitimate purpose of a connection pool. |
22:22.33 | whartung | sounds more like Postgres is not letting the connection pool have them. |
22:22.49 | oconnore | Postgres reports them idle |
22:22.51 | whartung | tomcat doesn't really have an idea of a "super user" |
22:23.01 | oconnore | right |
22:23.34 | whartung | but it sounds like postgres is the one complaining. You're noting that tomcat is continuing to open new connections rather than perhaps reusing existing ones. |
22:24.12 | whartung | "The maximum number of connections that should be kept in the pool at all times. Default value is maxActive:100 Idle connections are checked periodically (if enabled) and connections that have been idle for longer than minEvictableIdleTimeMillis will be released. (also see testWhileIdle)"
22:24.24 | whartung | so, by default maxIdle == maxActive |
22:25.03 | whartung | so looks like it's working, just not necessarily what you're expecting |
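The attributes whartung quotes live on the JNDI Resource definition. A sketch of a tomcat-jdbc pool that caps idle connections well below a Postgres `max_connections` limit (resource name, URL, credentials, and values are all hypothetical):

```xml
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/mydb"
          username="app" password="secret"
          maxActive="20"
          maxIdle="10"
          minIdle="2"
          testWhileIdle="true"
          validationQuery="SELECT 1"
          timeBetweenEvictionRunsMillis="30000"
          minEvictableIdleTimeMillis="60000"/>
```

With maxIdle set explicitly, the periodic eviction run releases connections idle longer than minEvictableIdleTimeMillis instead of holding all of them open up to maxActive.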
23:05.09 | oconnore | whartung: I reduced maxActive and the error still occurs, now at exactly maxActive connections. |
23:05.43 | whartung | how many connections do you have configured on Postgres? |
23:06.00 | oconnore | whartung: I reduced it to 90, and now I get the error at 90 open connections |
23:06.16 | whartung | your reduced postgres to 90? |
23:06.25 | oconnore | no, postgres is still 100 with 5 reserved |
23:07.11 | whartung | how are you testing this? |
23:08.16 | oconnore | whartung: just normal development. I compile a .war, undeploy from the /manager, and then redeploy. Occasionally everything dies because of the connection limit. |
23:08.53 | whartung | because you're hitting a postgres limit here, not a tomcat limit. why are you developing with 100 connections? |
23:09.04 | oconnore | whartung: I am hitting the tomcat limit now |
23:09.07 | oconnore | not the postgres limit |
23:09.15 | whartung | what error is tomcat giving you? |
23:09.20 | oconnore | the same one as before |
23:09.31 | oconnore | but at 90 connections (the tomcat limit) |
23:09.32 | whartung | that earlier error was a postgres limit |
23:10.01 | oconnore | I set a limit in tomcat for 90 connections, and now I am hitting the error at the 90 connection limit |
23:10.12 | oconnore | that implies that it is a tomcat limit, no? |
23:10.16 | whartung | no |
23:10.36 | oconnore | uh, did postgres magically infer the new setting? |
23:10.47 | whartung | like I said, tomcat doesn't know anything about "super users", it doesn't care, so it certainly wouldn't "reserve" connections for one. |
23:10.57 | whartung | what happens when you use 10 in tomcat |
23:11.18 | oconnore | whartung: but how on earth would postgres use the tomcat limit? |
23:11.47 | whartung | it wouldn't, but you're still borderline near it -- you might have pgAdmin open and use up more connections, who knows.
23:12.09 | oconnore | i don't, trying 10 now |
23:12.42 | oconnore | ok, restarted the server, 1 active connection for debugging |
23:12.53 | whartung | tomcat or postgres? |
23:13.57 | oconnore | tomcat is set to maxActive=10 |
23:14.01 | oconnore | postgres is the same as before |
23:14.22 | whartung | when you kill tomcat I assume all the connections died on pg. |
23:14.51 | oconnore | i killed postgres, not tomcat, but yes, all the connections died |
23:15.17 | whartung | ok…I never kill postgres..I can't even think when I last restarted PG save when the machine boots. |
23:16.29 | *** join/#tomcat factor (~factor@74.196.174.25) |
23:21.10 | oconnore | whartung: oh, I think I just found it. I did not have maxIdle set, only maxActive. maxidle defaults to 100, which exceeds my postgres limit |
23:21.22 | oconnore | I confused maxIdle with maxActive |
23:22.06 | whartung | I would like to hope that if maxIdle > maxActive, maxIdle would == maxActive…. |
23:22.10 | whartung | but, "good!" |
23:22.12 | whartung | :) |
23:22.16 | oconnore | thank you for talking with me :) |
23:22.22 | whartung | you bet |
23:39.15 | oconnore | whartung: erg, it just blew up again "FATAL: remaining connection slots are reserved for non-replication superuser connections" |
23:39.35 | oconnore | 91 open connections, maxActive, maxIdle, minIdle all 10 |
23:45.02 | *** join/#tomcat macIvy (~macIvy@ip70-180-159-214.lv.lv.cox.net) |
23:51.44 | whartung | yea wow hmm |
23:55.43 | whartung | oconnore: that certainly doesn't make much sense