Maybe I was not clear enough.
I need to buffer data, ensuring persistence, allowing a producer to
share its produced data with a consumer.
- The producer is a thread which listens for GPS data and stores it
  in a buffer.
- The consumer is a thread which reads data from the buffer and sends
  it to a remote host.
Understood.
Is there a particular reason for using threads? This would, IMO, be
easier to code and test as two independent processes seeing that there is
apparently no interaction between your GPSListener and Consumer threads.
Is the Consumer doing anything apart from:
    while (data-item in buffer)
        send data-item to remote host
        if (ack received from host)
            delete data-item from buffer
If that's all it's doing then, again IMO, you don't need the Consumer
process if you put the data in a database table that's treated as a FIFO
queue and let the remote process fetch data directly from it. Doing this
doesn't affect the logic in the remote process: it still needs a read
loop regardless of whether it's firing off SQL SELECTs or accepting data
from your Consumer process.
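To make the FIFO-table idea concrete, here's a minimal sketch of the producer side using SQLite. The table name (gps_queue), database file name, and the idea that each GPS reading is stored as a single text payload are all assumptions for illustration, not part of your design:

```python
# Producer-side sketch: a database table treated as a FIFO queue.
# Table and column names are hypothetical.
import sqlite3

def open_queue(path="gps_queue.db"):
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS gps_queue (
               id  INTEGER PRIMARY KEY AUTOINCREMENT,  -- insertion order = FIFO order
               fix TEXT NOT NULL                       -- one GPS reading
           )"""
    )
    conn.commit()
    return conn

def store_fix(conn, fix):
    # Commit each reading as it arrives, so a crash loses at most
    # the reading currently being written.
    conn.execute("INSERT INTO gps_queue (fix) VALUES (?)", (fix,))
    conn.commit()
```

The GPS listener thread simply calls store_fix for each reading; persistence comes from the database rather than from anything you have to write yourself.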
By eliminating the Consumer you also remove the need to design, implement
and test the message handling protocol you'll need to move data from
Consumer to the remote process. Bear in mind that using a simple
unidirectional data stream largely cancels the data security provided by
the cache: unless the remote process acknowledges each data item as it's
received and processed, how can you know when it's safe to delete an item
from the cache or, after a major failure, where to restart without data
loss or duplication?
If you use transactional SQL to fetch the data, you get everything I
described above for free. The more I think about your requirements as I
understand them, the more likely it is that this is how I'd do it.
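As a sketch of what "for free" means here, the remote process can fetch and delete in one step, deleting a row only after the send has succeeded. Again the gps_queue table and the send callback are hypothetical names, assuming SQLite:

```python
# Remote-side sketch: fetch the oldest queued reading, send it, and
# delete it only once the send succeeds. A crash before the commit
# leaves the row in place, so restart resumes without data loss.
import sqlite3

def fetch_and_send(conn, send):
    row = conn.execute(
        "SELECT id, fix FROM gps_queue ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return False                  # queue is empty
    rowid, fix = row
    send(fix)                         # raises on failure -> nothing deleted
    conn.execute("DELETE FROM gps_queue WHERE id = ?", (rowid,))
    conn.commit()                     # delete commits only after a good send
    return True
```

Note the one window this leaves open: if the process dies after a successful send but before the commit, the same item is re-sent on restart, so you trade possible duplication for guaranteed no loss, which is usually the right trade for this kind of cache.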
Of course you could also use IBM's WebSphere MQ. This is message-handling
middleware which provides caching, transport and restart/recovery
facilities. However, it's not free and may well be OTT for your
requirement.