I have a set of 5 queries in place to refresh a reference table used in a database -- a customer account list. The source data for the table is maintained separately, and each week we download the current table into our database. For operational purposes, if an account is cancelled for whatever reason, it is deleted from the customer master, preventing any further records from being generated for that customer number. In a forward-looking system this works, but our database tracks historical information, so we want to keep all current and past customers in our table.

The refresh process we have in place so far works like this:
  1. Copy the existing reference table to a temporary holding table
  2. Purge all data from the existing reference table
  3. Append the new data into the reference table
  4. Append the old data from the holding table into the reference table
  5. Purge the holding table
The net result is that all of the current information gets placed into our table, and then any records from the old table that are no longer in the new table are also added. The customer number is used as the primary key to prevent duplicates.
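For concreteness, the five steps above can be sketched in SQL (run here through Python's sqlite3 for a self-contained demo). The table and column names -- CustomerRef, CustomerNew, CustomerHold, CustNo -- are illustrative assumptions, not taken from the actual database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Assumed schema: CustomerRef is the reference table, CustomerNew is
# the weekly download already staged in the database.
cur.executescript("""
    CREATE TABLE CustomerRef (CustNo INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE CustomerNew (CustNo INTEGER PRIMARY KEY, Name TEXT);
    -- Seed data: customer 1 exists and was renamed in the new feed,
    -- customer 2 is new, customer 3 was cancelled (absent from the feed).
    INSERT INTO CustomerRef VALUES (1, 'Acme'), (3, 'OldCo');
    INSERT INTO CustomerNew VALUES (1, 'Acme Inc'), (2, 'NewCo');
""")

# Step 1: copy the existing reference table to a holding table
cur.execute("CREATE TABLE CustomerHold AS SELECT * FROM CustomerRef")
# Step 2: purge the reference table
cur.execute("DELETE FROM CustomerRef")
# Step 3: append the new data
cur.execute("INSERT INTO CustomerRef SELECT * FROM CustomerNew")
# Step 4: append old rows not already present (the primary key is what
# blocks duplicates in the original process; here the WHERE clause
# filters them out explicitly instead of relying on key violations)
cur.execute("""
    INSERT INTO CustomerRef
    SELECT * FROM CustomerHold
    WHERE CustNo NOT IN (SELECT CustNo FROM CustomerRef)
""")
# Step 5: purge the holding table
cur.execute("DROP TABLE CustomerHold")

print(sorted(cur.execute("SELECT * FROM CustomerRef").fetchall()))
# -> [(1, 'Acme Inc'), (2, 'NewCo'), (3, 'OldCo')]
```

The end state is exactly as described: current records win, and cancelled customers survive from the old table.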

Is this really an efficient way to accomplish the task? An "Update" query might work, but since any or all fields can change for any record, such a query could easily become over-complicated, and might even run slower than the 5-step process above.
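One possible simplification, sketched under the same assumed table names as above (CustomerRef, CustomerNew, CustNo are illustrative): instead of emptying and rebuilding the reference table, delete only the rows that reappear in the weekly feed, then append the whole feed. Two queries, no holding table, and no per-field UPDATE logic:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE CustomerRef (CustNo INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE CustomerNew (CustNo INTEGER PRIMARY KEY, Name TEXT);
    INSERT INTO CustomerRef VALUES (1, 'Acme'), (3, 'OldCo');
    INSERT INTO CustomerNew VALUES (1, 'Acme Inc'), (2, 'NewCo');
""")

# Query 1: drop any reference row that the new feed supersedes
cur.execute("""
    DELETE FROM CustomerRef
    WHERE CustNo IN (SELECT CustNo FROM CustomerNew)
""")
# Query 2: append the entire new feed; rows for cancelled customers
# were never deleted, so history is preserved
cur.execute("INSERT INTO CustomerRef SELECT * FROM CustomerNew")

print(sorted(cur.execute("SELECT * FROM CustomerRef").fetchall()))
# -> [(1, 'Acme Inc'), (2, 'NewCo'), (3, 'OldCo')]
```

Whether this is actually faster than the 5-step version would need measuring, but at ~1200 rows either approach should be quick.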

The current table is ~1200 records, 16 fields per record.