On 8/27/2014 11:17 AM, Scott Gray wrote:
> I don't have a problem with multi-threaded transactions, I'm not sure under what situations I would use it but I'm not against it. I have a better understanding now of why you don't want to use ThreadLocal, thanks.

In our current design, an incoming request is assigned to a thread, and that thread executes the entire request/response lifecycle. During that lifecycle, the thread could spend a lot of time waiting for disk I/O or database I/O. That is a very real possibility, not a hypothetical one.

For example: when I created my multi-threaded data loader, I knew the database was the bottleneck - the single-threaded data loader was spending most of its time waiting on the database. My solution was this: have the main thread parse the various XML files into Java objects (as it already did), but instead of performing operations on those objects (creating tables, indexes, etc.), put them in a queue where OTHER threads perform the operations on them. It was a multi-threaded producer-consumer design and it worked well. That is why Adam said it was a lot like SEDA. The difference in SEDA is the feedback loop, where queue parameters are monitored and adjusted dynamically to maintain liveness. That feature was not necessary in my scenario.

Getting back to the request/response lifecycle: if we assume the request thread is blocked waiting for slow I/O, then the potential exists that new requests will be blocked - unless we allow more threads. But wait! More threads might cause problems. There is a tipping point where adding more threads will slow things down, due to thread maintenance overhead. (In my experience, anything more than 2*CPU threads in OFBiz will have no effect or will slow processes down.) So, more threads are allocated to avoid blocked requests, but those new requests are going to have horrible response times because the server is busy taking care of the overhead of all of those threads and not doing any real work.
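The data-loader pipeline described above can be sketched with a `BlockingQueue`: the main thread acts as the producer (standing in for the XML parser) while a small pool of worker threads consumes and processes the queued objects. This is a minimal, self-contained sketch, not OFBiz code - the record strings and the `process` method are illustrative stand-ins for parsed XML entities and the database operations.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class LoaderSketch {
    // Counts how many records the consumers handled (for demonstration).
    static final AtomicInteger processed = new AtomicInteger();
    // Poison pill signalling "no more records".
    private static final String EOF = "__EOF__";

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
        // Keep the total thread count modest, per the 2*CPU observation above.
        int consumers = Math.max(2, Runtime.getRuntime().availableProcessors());

        Thread[] workers = new Thread[consumers];
        for (int i = 0; i < consumers; i++) {
            workers[i] = new Thread(() -> {
                try {
                    // Consume until the poison pill is seen.
                    for (String rec = queue.take(); !rec.equals(EOF); rec = queue.take()) {
                        process(rec); // a real loader would create tables, indexes, rows, etc.
                    }
                    queue.put(EOF); // re-enqueue so the other consumers also terminate
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[i].start();
        }

        // Producer: the main thread "parses" and enqueues objects.
        for (int i = 0; i < 10; i++) {
            queue.put("record-" + i);
        }
        queue.put(EOF); // no more input

        for (Thread w : workers) {
            w.join();
        }
    }

    private static void process(String record) {
        processed.incrementAndGet();
    }
}
```

The bounded queue also gives natural backpressure: if the consumers fall behind, `put` blocks the producer instead of letting parsed objects pile up in memory.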
Users get frustrated waiting for a response and click the refresh button - generating more requests, and the situation escalates.

SEDA solved that problem by having only one thread service incoming requests. That thread has only one task: drop the request in a queue. Additional threads service the queue. A feedback loop monitors the queue parameters, and if it appears the server is becoming overloaded, new requests are rejected, or a quick response is sent indicating a busy server (short-circuiting the normal execution path). Instead of having one thread per request, you have a finite number of threads that are closely monitored and adjusted dynamically for optimum performance. (This is an over-simplification of the SEDA design, but it gives the general idea.)

What does all this have to do with OFBiz? I'm not sure. The concept is intriguing and I think there might be an application for it in OFBiz. One area where I applied a SEDA concept to OFBiz is the metrics feature: if a particular request URL's response time exceeds a threshold, you can respond with an alternate view - maybe a server-busy message, or a normal page with reduced capabilities.

Getting back to Scott's question: let's say that 20 requests arrive at the same time or very close together - triggering 20 threads. Okay, that means we have 20 threads executing service engine code, entity engine code, rendering code, etc. But we really don't need 20 threads. We could have one thread executing service engine code and rendering code, and maybe 5 threads executing entity engine code (where more threads are truly needed). A design like that requires Delegator instances, and the transactions they are running in, to be handed off from one thread to another.

Adrian Crum
Sandglass Software
www.sandglass-software.com
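The SEDA-style admission control described in the message - one acceptor whose only job is to drop requests into a bounded queue, with overload short-circuited into a quick "busy" response - can be sketched as follows. This is an illustration under assumed names, not the SEDA implementation or OFBiz code; the key point is that `offer` on a full bounded queue fails immediately instead of blocking, which is where the busy response would be sent.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class AdmissionSketch {
    static final AtomicInteger served = new AtomicInteger();
    static final AtomicInteger rejected = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        // Small bound so the demo overloads quickly; a real stage would tune this.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4);

        // One worker drains the queue; a real SEDA stage would have a pool
        // whose size is adjusted by the feedback loop.
        Thread worker = new Thread(() -> {
            try {
                Integer req;
                while ((req = queue.take()) >= 0) { // -1 is the shutdown sentinel
                    Thread.sleep(5); // simulate slow database/disk I/O
                    served.incrementAndGet();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // Acceptor: its only task is to drop each incoming request in the queue.
        // offer() returns false when the queue is full - short-circuit with "busy".
        for (int i = 0; i < 20; i++) {
            if (!queue.offer(i)) {
                rejected.incrementAndGet(); // here we would send a "server busy" view
            }
        }

        queue.put(-1); // shut the worker down after the backlog drains
        worker.join();
        System.out.println("served=" + served + " rejected=" + rejected);
    }
}
```

Because the acceptor never blocks, a flood of refresh-clicks degrades into fast rejections rather than an ever-growing pile of stalled request threads.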