Knowing when to evict a cache and regenerate it has long been one of the well-known hard problems in computer engineering, and I run into it from time to time.
Recently I got a requirement: whenever a new row is inserted into a table, the metadata cache must be updated in all running application instances connected to that database.
My first thought was simple: whenever we issue an insert, we make a service call after committing the transaction to refresh the cache. But this updates only a single pod, and there is no guarantee that the table will be modified only through the application. A developer could update it manually or run a one-off patch script.
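To see why the post-commit hook falls short, here is a minimal sketch (all names are hypothetical) simulating two application pods that share one database but each keep their own in-process metadata cache. The pod that performs the insert refreshes itself, while the other pod keeps serving stale data:

```python
# Hypothetical simulation: two pods, one shared database, per-pod caches.

class Database:
    def __init__(self):
        self.rows = ["alpha"]

    def insert(self, row):
        self.rows.append(row)

class Pod:
    def __init__(self, db):
        self.db = db
        self.cache = list(db.rows)  # snapshot taken at startup

    def refresh_cache(self):
        self.cache = list(self.db.rows)

db = Database()
pod_a, pod_b = Pod(db), Pod(db)

# pod_a performs the insert and refreshes its own cache post-commit...
db.insert("beta")
pod_a.refresh_cache()

print(pod_a.cache)  # fresh: ['alpha', 'beta']
print(pod_b.cache)  # stale: ['alpha'] - pod_b never heard about the insert
```

The same gap applies to manual updates: if a developer inserts a row directly in SQL, no pod's `refresh_cache` is ever called.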
My second thought was a scheduled task that polls the table periodically and refreshes the cache. This would certainly work, and it is easy to implement and easy to justify, but it means a steady stream of database calls. If I increase the polling interval, change detection lags; if I decrease it, the call volume grows.
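The polling cost can be softened by making each poll cheap: query only a version marker (in Oracle this could be something like `MAX(ORA_ROWSCN)` or a last-modified column, an assumption on my part) and pay for a full reload only when it changes. A sketch with hypothetical names:

```python
import threading

class PollingCache:
    """Refreshes only when a cheap version query reports a change."""

    def __init__(self, fetch_version, fetch_rows, interval=5.0):
        self.fetch_version = fetch_version  # cheap, e.g. SELECT MAX(last_modified)
        self.fetch_rows = fetch_rows        # expensive full reload
        self.interval = interval
        self.version = fetch_version()
        self.rows = fetch_rows()

    def poll_once(self):
        v = self.fetch_version()
        if v != self.version:   # only pay for a full reload on change
            self.version = v
            self.rows = self.fetch_rows()
            return True
        return False

    def run(self, stop_event):
        # Background loop: e.g. threading.Thread(target=cache.run, args=(stop,))
        while not stop_event.wait(self.interval):
            self.poll_once()

# Stand-in "table": a version counter plus rows.
table = {"version": 1, "rows": ["alpha"]}
cache = PollingCache(lambda: table["version"], lambda: list(table["rows"]), 0.1)

table["rows"].append("beta")
table["version"] += 1
print(cache.poll_once())  # change detected, reload happens
print(cache.rows)
print(cache.poll_once())  # no change, no reload
```

Even with the cheap check, the interval-versus-lag trade-off remains: the polls themselves never stop.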
I started hunting for an event-based mechanism to handle this scenario. I knew PostgreSQL has pg_notify, so I expected something similar to exist in our database, Oracle, as well.
I found three candidate approaches:
1. UTL_HTTP (HTTP callouts from PL/SQL, e.g. from a trigger)
https://docs.oracle.com/cd/A97385_01/appdev.920/a96612/u_http.htm
2. DBMS_AQ (Oracle Advanced Queuing)
https://www.oracle.com/in/database/advanced-queuing
3. CQN (Continuous Query Notification)
https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/cqn.html
After comparing the pros and cons of these techniques, I am going with CQN. The application registers a query with the database, and the database pushes a notification when the result of that query changes. I will register in query-result-change mode and issue a cache refresh command whenever a notification arrives.
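A live CQN registration needs a driver with change-notification support (for example, python-oracledb exposes it via `connection.subscribe`), so instead of a real registration here is a driver-free sketch of the pattern the notification callback will follow. The key point: a CQN notification tells you *that* the results changed, not what the new rows are, so the callback should invalidate the cache and let the next read reload it. All names below are hypothetical:

```python
# Driver-free sketch of the CQN callback pattern (names are hypothetical).
# With a real driver, `on_change` would be the callback passed to the
# change-notification registration in query-result-change mode.

import threading

class MetadataCache:
    def __init__(self, load_rows):
        self._load_rows = load_rows
        self._lock = threading.Lock()
        self._rows = None            # None means "stale, reload on next read"

    def on_change(self, notification):
        # CQN callback: we only learn *that* results changed, not the new
        # rows, so mark the cache stale instead of patching it in place.
        with self._lock:
            self._rows = None

    def get(self):
        with self._lock:
            if self._rows is None:
                self._rows = self._load_rows()
            return self._rows

table = ["alpha"]
cache = MetadataCache(lambda: list(table))

print(cache.get())            # loads and caches ['alpha']
table.append("beta")
print(cache.get())            # still the cached snapshot
cache.on_change({"table": "METADATA"})  # simulated CQN notification
print(cache.get())            # reloaded lazily after invalidation
```

Invalidate-then-lazy-reload also handles every pod uniformly: each instance holds its own registration, so a manual insert by a developer triggers the same notification as an application insert.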
Happy coding.