Blog of the Open Source JavaHotel project

Thursday, 14 June 2012

JSPWiki in the Cloud and security

I've just deployed a new version of JSPWiki in the Cloud (the new sources are also committed) and it seems that authentication is working. Users can log in and log out.
But what seemed quite easy at the beginning ended up requiring a huge amount of refactoring and rebuilding.
First I had to fix an error which caused me a lot of headaches. The code worked in the development environment but failed after deployment to the production server, with a nice-looking error message like:

java.lang.IllegalStateException: WRITER
 at org.mortbay.jetty.Response.getOutputStream(
 at com.metaparadigm.jsonrpc.JSONRPCServlet.service(

What made it more difficult was that this error popped up only in a specific scenario: the same execution path worked as expected the first time but failed the next time. Because debugging on the production server is not possible, the only way to find the bug was to add more and more trace messages.
Finally I was able to locate the offending code. It tried to write to the response after the response had already been committed. So the solution was to enclose that code with something like:

if (!response.isCommitted()) { ... }
But I still do not understand why this code worked in the main trunk of JSPWiki, or why the same execution path worked the first time and failed the next.
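To make the failure mode concrete, here is a minimal, self-contained model of the situation. ResponseModel is a made-up stand-in, not JSPWiki's or Jetty's actual class: it throws IllegalStateException on a write after commit (as the production error did) and shows how the isCommitted() guard avoids it.

```java
// Simplified, hypothetical model of a servlet response: once the response
// is committed, any further write raises IllegalStateException, mirroring
// the "java.lang.IllegalStateException: WRITER" seen on the server.
class ResponseModel {
    private boolean committed = false;
    private final StringBuilder body = new StringBuilder();

    public boolean isCommitted() { return committed; }

    public void write(String text) {
        if (committed) {
            // corresponds to the error thrown by the real container
            throw new IllegalStateException("WRITER");
        }
        body.append(text);
    }

    public void commit() { committed = true; }

    // The fix from the post: write only if the response is not committed yet.
    public void safeWrite(String text) {
        if (!isCommitted()) {
            write(text);
        }
    }

    public String getBody() { return body.toString(); }
}
```

With this guard the second pass over the same execution path simply skips the write instead of blowing up.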

But after overcoming this problem the next one appeared:

Uncaught exception from servlet
java.lang.IllegalArgumentException: Task size too large

It seemed that the size of the data assigned to the session had grown too large. The only solution was to drastically reduce the amount of data persisted with the session. So I decided to make the class that was the main culprit request scoped, and to keep only the class holding the user credentials session scoped.
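A rough way to see how much each attribute contributes to the persisted session is to serialize it and measure the byte count. This is a generic sketch, not JSPWiki code, and the objects measured below are made up for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Helper to estimate the serialized footprint of a session attribute.
class SessionSize {
    static int serializedSize(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj); // same mechanism a container uses to persist sessions
        }
        return bytes.size();
    }
}
```

Comparing the footprint of a small credentials object against the heavyweight object makes it obvious which attribute has to leave the session.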
The next step was to rebuild the code that kept a lot of data in static fields. In a cloud environment dynamic data cannot be kept in statics, because nobody guarantees that the next request will be executed in the same JVM.
So finally I made a lot of classes request scoped and put them under Spring control as beans. But this did not go easily because of mutual dependencies between them, so I had to spend a lot of time trying to understand and untangle these dependencies.
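The shape of that refactoring can be sketched in plain Java (all class names here are hypothetical; in the real code the per-request holders were Spring request-scoped beans rather than hand-created objects):

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of moving state out of statics and into per-request objects.
class RequestScopeDemo {

    // Before: static state shared across requests. On App Engine this is
    // unreliable because consecutive requests may land on different JVMs.
    static class StaticHolder {
        static final Map<String, String> state = new HashMap<>();
    }

    // After: a holder created fresh for each request; in the real code such
    // classes were registered as Spring beans with request scope.
    static class RequestHolder {
        final Map<String, String> state = new HashMap<>();
    }

    // Each simulated request gets its own holder, so no state can leak
    // between requests or depend on which JVM served the previous one.
    static RequestHolder handleRequest(String key, String value) {
        RequestHolder holder = new RequestHolder();
        holder.state.put(key, value);
        return holder;
    }
}
```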
But finally it seems to be working. The main problem now is to improve performance, because almost everything is initialized on every request. Another problem is to reduce the number of reads from the datastore (the same data is read several times in one request) by introducing a cache local to a single request, and a cache shared between requests using Google App Engine memcache.
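The caching idea can be sketched as a two-level read path: check a per-request map first, then a shared cache, and only then hit the datastore. This is a hypothetical sketch; the shared HashMap below merely stands in for App Engine memcache, and the class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Two-level read cache: a request-local map avoids re-reading the same
// entity several times within one request; the shared map stands in for
// Google App Engine memcache, which outlives a single request.
class TwoLevelCache {
    private final Map<String, String> requestLocal = new HashMap<>();
    private final Map<String, String> shared; // memcache stand-in
    private int datastoreReads = 0;           // counts expensive reads

    TwoLevelCache(Map<String, String> shared) {
        this.shared = shared;
    }

    public String get(String key, Function<String, String> datastoreRead) {
        String value = requestLocal.get(key);
        if (value == null) {
            value = shared.get(key);
            if (value == null) {
                value = datastoreRead.apply(key); // expensive datastore call
                datastoreReads++;
                shared.put(key, value);
            }
            requestLocal.put(key, value);
        }
        return value;
    }

    public int getDatastoreReads() { return datastoreReads; }
}
```

Within one request the datastore is read at most once per key, and a second request served from the same shared cache reads it zero times.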
