If you are concerned that, in certain scenarios, a PostgreSQL query may take exceptionally long to execute, overwhelming your database or making your application or API unacceptably slow, you may want to add a time limit to query execution. This limit is referred to as the ‘statement timeout’.
The syntax is quite straightforward.
Within the .connect() function
You pass in an options argument, as shown below.
import psycopg2

conn = psycopg2.connect(db_url, options='-c statement_timeout=300000')  # timeout in ms
In the above snippet, we have added a timeout of 300,000 ms, i.e., 300 seconds or 5 minutes. This means that if any query takes longer than 5 minutes to execute, it will time out.
With this configuration, if a query exceeds the timeout, you will see an error message similar to the one below:
Traceback (most recent call last):
File "/var/task/src/models/lambda_raw_data/lambda_raw_data.py", line 34, in get_raw_data
psycopg2.extensions.QueryCanceledError: canceling statement due to statement timeout
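If you would rather handle this error gracefully than let it propagate, you can catch it explicitly. Below is a minimal sketch, assuming psycopg2; db_url is a placeholder for your connection string, and the pg_sleep() query is a deliberately slow statement used only for illustration.

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect(db_url, options='-c statement_timeout=300000')  # db_url is a placeholder
try:
    with conn.cursor() as cur:
        cur.execute("SELECT pg_sleep(600)")  # runs longer than the 5-minute limit
except psycopg2.extensions.QueryCanceledError:
    conn.rollback()  # clear the aborted transaction before reusing the connection
    print("Query cancelled due to statement timeout")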
As an env variable
You can also configure this as an env variable:
import os

# libpq reads PGOPTIONS at connection time, so set it before calling .connect()
os.environ['PGOPTIONS'] = '-c statement_timeout=1000'  # timeout in ms (1 second)
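A minimal sketch to verify that the setting took effect, again with db_url as a placeholder:

import os
import psycopg2

os.environ['PGOPTIONS'] = '-c statement_timeout=1000'

conn = psycopg2.connect(db_url)  # db_url is a placeholder
with conn.cursor() as cur:
    cur.execute("SHOW statement_timeout")
    print(cur.fetchone())  # should print something like ('1s',)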
With SQLAlchemy
If you are using the SQLAlchemy library, the option is passed to the create_engine() function via connect_args.
from sqlalchemy import create_engine

engine = create_engine(url, echo=False, connect_args={"options": "-c statement_timeout=55000"})
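A quick way to confirm that the engine picked up the option, sketched here assuming SQLAlchemy 1.4 or later (url is a placeholder for your database URL):

from sqlalchemy import create_engine, text

engine = create_engine(url, echo=False, connect_args={"options": "-c statement_timeout=55000"})
with engine.connect() as conn:
    print(conn.execute(text("SHOW statement_timeout")).scalar())  # should print something like '55s'

Note that when the timeout fires, SQLAlchemy wraps the underlying QueryCanceledError in a sqlalchemy.exc.OperationalError, so catch that if you want to handle it.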
Found this post helpful? Then check out further posts on PostgreSQL on iotespresso.com. Also, follow IoT Espresso on Twitter to get notified about every new post.
I have the opposite scenario: my connection with Postgres crashes mid-process and throws a connection timeout error. The time of the crash is random; every execution crashes at a different time.
You may want to check your DB credentials, and also make sure that your database server is up and running.
My query exceeds the execution limit when I set 4000 for 4 seconds.
In this case, the query might be taking longer than 4 seconds to execute. Try increasing the timeout.
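If you only need a larger limit for a specific session rather than for every connection, you can also issue a SET statement after connecting. A minimal sketch, with db_url again as a placeholder:

import psycopg2

conn = psycopg2.connect(db_url)  # db_url is a placeholder
with conn.cursor() as cur:
    cur.execute("SET statement_timeout = 10000")  # 10 seconds, for this session only
    cur.execute("SELECT pg_sleep(5)")  # completes, since it stays under the new limit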