Comfortable thread pools using java.util.concurrent

Today I once again faced the problem of executing many short tasks in a thread-pooled environment. One way would be to write the pooling myself (as I have done before ;)). But I thought: why reinvent the wheel every time instead of using a standard library like java.util.concurrent?

What I want:

  • A thread pool that grows and shrinks automatically depending on the number of queued tasks (sometimes there is nothing to do, and sometimes many hundreds of tasks arrive in a short time)
  • A configurable limit on the maximum number of running threads (we cannot create hundreds of threads just for this; keep in mind that the tasks have a short run time)
  • Queuing of tasks for later execution once the thread pool has reached the given limit

So I looked at the provided examples and at the Executors class, which offers quick and simple factory methods:

  1. newCachedThreadPool: looks good, but unfortunately you cannot define a maximum number of threads.
  2. newFixedThreadPool: handles the desired limit, but not the automatic growing and shrinking.

So we cannot use the predefined factory methods and have to instantiate ThreadPoolExecutor directly.
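One way to get all three properties from ThreadPoolExecutor is to set the core size equal to the maximum size and let core threads time out, so the pool grows up to the limit, queues everything beyond it, and shrinks again when idle. A minimal sketch (the limit of 4 threads and the 30-second timeout are illustrative values, not from the original post):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        int maxThreads = 4; // illustrative limit on running threads
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreads, maxThreads,               // core == max: grow up to the limit
                30, TimeUnit.SECONDS,                 // idle threads die after 30 s ...
                new LinkedBlockingQueue<Runnable>()); // excess tasks wait in the queue
        pool.allowCoreThreadTimeOut(true);            // ... so the pool can shrink to zero

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            pool.execute(done::incrementAndGet);      // many short tasks
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("executed: " + done.get());
    }
}
```

Note that with an unbounded queue the pool never creates more than corePoolSize threads, which is exactly why core and maximum are set to the same value here.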


Really caching web content

Today I tried to optimize a web application to make it faster. One big and low-hanging fruit is the correct caching of static files such as images, CSS, and JavaScript. There are two things you can save with caching, and only correct usage will really save time and bandwidth.

1. Caching content with revalidation

Every HTTP exchange consists of two parts: the request and the response. In the request the browser specifies which resource it needs; the server's response contains that resource. One caching strategy is to slim down the response, which saves bandwidth and, on slow connections, also time. To achieve this you use the ETag and/or Last-Modified headers of the HTTP protocol.


If the browser needs the resource again later, it sends the If-None-Match and/or If-Modified-Since request headers. The server can then decide whether the resource has changed. If not, it answers with a 304 (Not Modified) response. But what if we already know on the first request that the content is safe for the next x minutes or days? In that case we could save the whole request. Imagine a page with 100 pictures and a ping time of 100 ms to the server: checking these URLs sequentially would take 10 seconds.
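The ETag round trip can be sketched with the JDK's built-in com.sun.net.httpserver (the path, port, and hash-based ETag scheme here are illustrative assumptions, not from the original post):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class EtagDemo {
    public static void main(String[] args) throws Exception {
        byte[] body = "static content".getBytes(StandardCharsets.UTF_8);
        // derive a validator from the content; any stable fingerprint works
        String etag = "\"" + Integer.toHexString(Arrays.hashCode(body)) + "\"";

        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
        server.createContext("/image", exchange -> {
            String match = exchange.getRequestHeaders().getFirst("If-None-Match");
            if (etag.equals(match)) {
                // client copy is still valid: 304 with no body saves the bandwidth
                exchange.sendResponseHeaders(304, -1);
            } else {
                exchange.getResponseHeaders().set("ETag", etag);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            }
            exchange.close();
        });
        server.start();
    }
}
```

The first request gets a 200 with the ETag; a repeat request carrying If-None-Match with that value gets a bodyless 304.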

2. Caching content with expiration dates

To give your content a time range in which it is valid, you have to define an expiration date using the Expires header. Additionally, you should enable caching itself for a time range using the Cache-Control header. The Cache-Control header can take several values, which can be combined. Typical values would be:
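The original listing appears to be missing here; a typical combination (with an illustrative one-day max-age) might look like this:

```
Cache-Control: public, max-age=86400, must-revalidate
```

Here "public" allows shared caches to store the resource, "max-age" gives the validity in seconds, and "must-revalidate" forces a fresh check once that time has passed.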


The last option means that the client has to re-request the resource from the server after "max-age" has elapsed, if the resource is needed again. Unfortunately, the Safari browser has a bug that makes it ignore the Expires and Cache-Control headers under some circumstances. As Steve Clay wrote on his blog, the problem is related to the definition of must-revalidate. So using must-revalidate is currently not a good idea until the bug is resolved.
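Setting both headers on the server side can be sketched again with the JDK's built-in HTTP server; in line with the Safari issue above, this sketch leaves out must-revalidate (the path, port, and one-day lifetime are illustrative assumptions):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class ExpiresDemo {
    // HTTP dates must be RFC 1123 formatted and given in GMT
    static String httpDate(long millis) {
        SimpleDateFormat fmt =
                new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(new Date(millis));
    }

    public static void main(String[] args) throws Exception {
        byte[] body = "static content".getBytes(StandardCharsets.UTF_8);
        long oneDayMillis = 24L * 60 * 60 * 1000;

        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
        server.createContext("/static", exchange -> {
            // expiration date: one day from now
            exchange.getResponseHeaders().set("Expires",
                    httpDate(System.currentTimeMillis() + oneDayMillis));
            // must-revalidate omitted because of the Safari bug described above
            exchange.getResponseHeaders().set("Cache-Control", "public, max-age=86400");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
            exchange.close();
        });
        server.start();
    }
}
```

With these headers the browser does not contact the server again for the resource until the day has passed, which saves the per-request round trip entirely.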

To easily find resources with missing Expires headers, you can use YSlow, a Firefox plugin provided by Yahoo.