I was recently invited to a meeting in advance of a maintenance window for one of the biggest applications at a customer. Until that moment I was not aware that I would be involved in the software upgrade and all the changes they would perform one and a half weeks later.
In this meeting, they told me that they would migrate the file storage (for PDFs) in the background to a new storage solution and would therefore have to run some update statements. They also told me that the first execution of the script on the integration environment took around 15 hours. It goes without saying that an update script which runs for 15 hours is critical for the whole upgrade: there could be network interruptions and other problems everywhere.
The script
I got the script that same day and started analyzing it. The script first has to be generated on a different environment. It contains around 11 million independent update statements, and every statement has the record ID hard-coded in the WHERE clause, which leads to a hard parse for each of the roughly 11 million statements. In combination with the execution over the network, this explains the duration of more than 10 hours.
UPDATE TABLENAME SET FIELD1 = 'NEW_VALUE', FIELD2 = 0, FIELD3 = 'NEW_VALUE', FIELD4 = DATE, FIELD5 = 1 WHERE id = 1;
UPDATE TABLENAME SET FIELD1 = 'NEW_VALUE', FIELD2 = 0, FIELD3 = 'NEW_VALUE', FIELD4 = DATE, FIELD5 = 1 WHERE id = 2;
UPDATE TABLENAME SET FIELD1 = 'NEW_VALUE', FIELD2 = 0, FIELD3 = 'NEW_VALUE', FIELD4 = DATE, FIELD5 = 1 WHERE id = 3;
...
UPDATE TABLENAME SET FIELD1 = 'NEW_VALUE', FIELD2 = 0, FIELD3 = 'NEW_VALUE', FIELD4 = DATE, FIELD5 = 1 WHERE id = 11000000;
COMMIT;
Solution finding
Due to the short timeframe, I was not able to implement a real solution, like getting the data directly from the source system via a database link, and had to work with what I had.
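For illustration only: such a "real" solution might have looked roughly like the MERGE below, where the database link SOURCE_DB and the table NEW_VALUES_TABLE are made-up placeholders, not objects from the actual project.

-- hypothetical sketch: read the new values straight from the source system over a database link
-- (SOURCE_DB and NEW_VALUES_TABLE are assumed names, not part of the actual migration)
MERGE INTO TABLENAME upd
USING (
  SELECT id, field1, field2, field3, field4, field5
  FROM   NEW_VALUES_TABLE@SOURCE_DB
) src
ON (upd.id = src.id)
WHEN MATCHED THEN UPDATE SET
  upd.field1 = src.field1,
  upd.field2 = src.field2,
  upd.field3 = src.field3,
  upd.field4 = src.field4,
  upd.field5 = src.field5;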
Even if I had been able to eliminate most of the network round trips by executing the script directly on the database server, there would have been further problems to overcome. The risk of losing my SSH session could have been addressed with tools like "screen". Another way to get rid of the network round trips would have been to convert everything into PL/SQL by wrapping all the statements in a single BEGIN/END block.
BEGIN
  UPDATE TABLENAME SET FIELD1 = 'NEW_VALUE', FIELD2 = 0, FIELD3 = 'NEW_VALUE', FIELD4 = DATE, FIELD5 = 1 WHERE id = 1;
  UPDATE TABLENAME SET FIELD1 = 'NEW_VALUE', FIELD2 = 0, FIELD3 = 'NEW_VALUE', FIELD4 = DATE, FIELD5 = 1 WHERE id = 2;
  UPDATE TABLENAME SET FIELD1 = 'NEW_VALUE', FIELD2 = 0, FIELD3 = 'NEW_VALUE', FIELD4 = DATE, FIELD5 = 1 WHERE id = 3;
  ...
  UPDATE TABLENAME SET FIELD1 = 'NEW_VALUE', FIELD2 = 0, FIELD3 = 'NEW_VALUE', FIELD4 = DATE, FIELD5 = 1 WHERE id = 11000000;
END;
/
The workaround
None of the mentioned workarounds would eliminate the hard parse problem, and none of them was an approach I was happy with. Without getting rid of the individual update statements, it would be almost impossible to reduce the execution time by a significant percentage. After talking to a colleague, I had the idea of not executing the provided update statements at all and not using the provided SQL script.
I asked the provider of the SQL script whether it would be possible to deliver a CSV file instead. The effort for him was almost the same: he only had to change the output from concatenating the elements into an update statement to writing them as a CSV line.
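The resulting file simply holds one line per record, with the new values and the record ID in the column order expected by the external table shown below. The concrete values here are made up for illustration:

NEW_VALUE,0,NEW_VALUE,2019-01-01,1,1
NEW_VALUE,0,NEW_VALUE,2019-01-01,1,2
NEW_VALUE,0,NEW_VALUE,2019-01-01,1,3
...
NEW_VALUE,0,NEW_VALUE,2019-01-01,1,11000000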
CREATE DIRECTORY DIRECTORY_OF_EXTERNAL_TABLE AS '/tmp/migdir';

CREATE TABLE MERGE_EXT (
  FIELD1 VARCHAR2(50),
  FIELD2 NUMBER(1),
  FIELD3 VARCHAR2(50),
  FIELD4 DATE,
  FIELD5 NUMBER(1),
  ID     NUMBER(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY DIRECTORY_OF_EXTERNAL_TABLE
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      FIELD1,
      FIELD2,
      FIELD3,
      FIELD4 date 'yyyy-mm-dd',
      FIELD5,
      ID
    )
  )
  LOCATION ('new_values.csv')
)
REJECT LIMIT UNLIMITED;
Once I received this file, I was able to create an external table on top of it. With this external table, a single, simple MERGE statement (shown below) could do all the 11 million updates.
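Before running the MERGE, a quick sanity check that the external table really reads the whole file is worth a moment; the count should roughly match the expected 11 million rows:

-- sanity check: can the external table read the CSV, and does the row count look right?
SELECT COUNT(*) FROM MERGE_EXT;

-- spot-check a handful of rows before touching the real table
SELECT * FROM MERGE_EXT WHERE ROWNUM <= 5;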
MERGE INTO update_table upd
USING MERGE_EXT mig
ON (upd.id = mig.id)
WHEN MATCHED THEN UPDATE SET
  upd.field1 = mig.field1,
  upd.field2 = mig.field2,
  upd.field3 = mig.field3,
  upd.field4 = mig.field4,
  upd.field5 = mig.field5;

COMMIT;
On the test environment, the MERGE statement took around 45 minutes. This was already far better than 15 hours, but I was pretty sure it would be even faster on production: the production server has more memory and more CPU cores, and during the maintenance window nobody would be working on the system. In the actual maintenance window, the MERGE statement took less than 3 minutes.
Conclusion
This solution could certainly be optimized further, but cutting the execution time from 15 hours to less than 3 minutes was enough to put out the fire in this situation.
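Just as one example of what "further" could mean here (an untested sketch; the parallel degree of 8 is an arbitrary assumption): enabling parallel DML for the session and hinting the MERGE would let Oracle spread the work over several CPU cores.

-- untested sketch: let Oracle parallelize the MERGE across several CPU cores
ALTER SESSION ENABLE PARALLEL DML;

MERGE /*+ PARALLEL(upd, 8) */ INTO update_table upd
USING MERGE_EXT mig
ON (upd.id = mig.id)
WHEN MATCHED THEN UPDATE SET
  upd.field1 = mig.field1,
  upd.field2 = mig.field2,
  upd.field3 = mig.field3,
  upd.field4 = mig.field4,
  upd.field5 = mig.field5;

COMMIT;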
As the title states, it was firefighting, and we only had roughly a week to develop, test and verify the solution.
There are many ways to get the same result, but most of the time the easiest one is letting the database do the job. We gave the database the opportunity to do what it does best, and with that we were able to shorten the downtime and avoid the risks mentioned above.