PMDF System Manager's Guide
PMDF-REF-6.0
33.1 Basics
It is important that you understand some of the basics of how PMDF
works. This section attempts to present a very basic overview.
When PMDF receives a message, PMDF writes the message as one or more
disk files in the PMDF queue directories. These files represent copies
of the message: at least one copy for each channel to which the message
must be enqueued. It is crucial that the received message be written to
a non-volatile medium such as a magnetic disk file: were the system to
crash before the message could be sent on to its final destination,
then the message might be lost. For this reason, PMDF always writes
received messages to disk before giving a positive acknowledgement of
receipt to the transmitter of the message. After a message is received,
an entry is made in the queue cache database.
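The essential point is that the message reaches non-volatile storage before any positive acknowledgement is returned to the sender. The following sketch illustrates that pattern in Python; it is purely illustrative and is not PMDF's actual implementation, and the queue directory path shown is a hypothetical placeholder.

    import os
    import tempfile

    QUEUE_DIR = "/pmdf/queue/tcp_local"   # hypothetical queue directory

    def enqueue_message(raw_message: bytes) -> str:
        """Write a received message to disk, and flush it, before ever
        acknowledging receipt to the sending system."""
        fd, path = tempfile.mkstemp(dir=QUEUE_DIR, suffix=".msg")
        with os.fdopen(fd, "wb") as f:
            f.write(raw_message)
            f.flush()
            os.fsync(f.fileno())   # force the data out to the disk itself
        # only after this point would an entry be added to the queue cache
        # database and a positive acknowledgement returned to the sender
        return path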
Once PMDF has received a message, it attempts to deliver it. This is
done by submitting a processing job for each channel to which the
message is enqueued. These jobs attempt to send the message to wherever
it is next bound (as determined by PMDF's domain rewriting rules). If a
job is successful, then the message copy it was handling is deleted and
the corresponding entry removed from the queue cache database. If not
successful, then the message copy is left on disk for a subsequent
delivery attempt.
So, the normal mode of operation is: a message is received and written to a
file, a record is added to the queue cache database, and a processing job is
started; the job reads the message file, attempts delivery, and, if delivery
succeeds, deletes the file and removes the record from the queue cache
database.
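As a rough illustration of that cycle, the sketch below shows the shape of a processing job's delivery loop. It is a simplification under assumed interfaces: the list of queue entries, the deliver() callable, and the queue cache helper are hypothetical placeholders, not PMDF interfaces.

    import os

    def run_channel_job(queue_entries, deliver, remove_cache_entry):
        """Attempt delivery of each queued message copy for one channel.

        queue_entries      -- message file paths (as recorded in the queue cache)
        deliver            -- callable returning True on successful delivery
        remove_cache_entry -- callable that removes the queue cache record
        """
        for path in queue_entries:
            with open(path, "rb") as f:
                message = f.read()
            if deliver(message):
                os.remove(path)            # the message copy is no longer needed
                remove_cache_entry(path)   # drop the queue cache record as well
            # on failure the file is left on disk for a later delivery attempt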
Given this basic scenario, several things should be clear:
- Increased throughput may be realized by:
- decreasing disk write and read times,
- increasing the internal memory buffer size for processing jobs to
decrease the use of temporary buffer files for large messages,
- increasing the number of simultaneous processing jobs and any
resources they might require,
- decreasing processing job overhead,
- decreasing per message processing job overhead by increasing the
number of messages handled per job,
- ensuring that the number of SMTP server processes available for
accepting incoming SMTP over TCP/IP messages is appropriate for the
level of message traffic,
- for general SMTP over TCP/IP channels, used to send SMTP messages
to multiple different destinations, decreasing per connection overhead
for outgoing SMTP over TCP/IP messages by collecting and sending
messages to the same destination host in one connection (see the sketch
following this list),
- for daemon SMTP over TCP/IP channels, commonly used to
send SMTP messages to single specific relay systems such as mailhubs or
firewalls, using multiple threads for outgoing connections, and
- tuning the queue cache database.
- Using a virtual RAM disk for the message store is a very bad idea.
Should your system crash, mail may be lost or corrupted.¹
- Keeping the message store on a shadowed disk may hurt performance:
whereas shadowset reads are on the average faster, writes are on the
average slower. Since usually only one read will be required, the
decreased read time will not be sufficient to compensate for the
increased write time.
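To make the connection caching suggestion above concrete, the sketch below uses Python's standard smtplib to push several queued messages to the same destination host over a single SMTP connection, paying the connection setup and teardown cost only once. It only illustrates the idea; it is not how PMDF's SMTP over TCP/IP channel programs are implemented, and the host and address values are placeholders.

    import smtplib

    def send_batch(host, messages):
        """Send several (from_addr, to_addrs, rfc822_text) tuples to the same
        destination host over one SMTP connection."""
        with smtplib.SMTP(host) as smtp:   # one TCP connection, one greeting
            for from_addr, to_addrs, text in messages:
                smtp.sendmail(from_addr, to_addrs, text)
        # the connection is torn down once, after all messages for this host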
The suggestions in the first bullet item above, as well as several
others, are explored in the remainder of this chapter.
On OpenVMS, note that the queue cache database is an RMS keyed, indexed file.
As such, it may be tuned using any of the standard RMS tuning tools. However,
Innosoft has already put a lot of effort into tuning it. Should you wish to
tune it differently from the FDL parameters in the file
PMDF_COM:queue_cache.fdl, you might first consult with Innosoft.
Note
¹ With certain provisos, however, storing the queue cache database on a
virtual RAM disk may be safe enough; see the discussion later in this chapter.
And a reliable, battery-backed, solid-state RAM disk may be safe enough for
the message store.