Even though I don't practice SQL much, I have enough experience to recognize a very bad design, something to avoid at all costs.
- Because you call current_datetime() twice, you can be unlucky and have the two calls land on a different minute, hour, or day, which will silently run the query over a different time span than expected.
- If the SQL compiler is unable to see that you want the two boundaries to be constant for the whole query (which is not guaranteed), it will evaluate current_datetime() twice per record, because current_datetime() is volatile. Not good either.
- If the SQL compiler is unable to remove the DateTime() wrapper from DateTime(A.Timestamp), the server will be unable to leverage an index on that column and must evaluate the function for every row.
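The third point can be observed directly with an engine's query planner. A minimal sketch, using SQLite as a stand-in for the real server and made-up table/index names; the principle (a function wrapped around an indexed column blocks index use) holds in most engines:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (ID INTEGER, Timestamp TEXT)")
con.execute("CREATE INDEX idx_ts ON tbl (Timestamp)")

# Sargable predicate: the indexed column is compared as-is,
# so the planner can seek into the index.
plan_ok = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tbl "
    "WHERE Timestamp >= ? AND Timestamp < ?",
    ("2024-01-01", "2024-01-02"),
).fetchall()

# Non-sargable predicate: wrapping the column in a function
# (here date()) hides it from the index, forcing a full scan.
plan_bad = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tbl WHERE date(Timestamp) = ?",
    ("2024-01-01",),
).fetchall()

print(plan_ok[-1][-1])   # SEARCH ... USING INDEX idx_ts
print(plan_bad[-1][-1])  # SCAN tbl
```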
The correct way to do this is:
DECLARE @Start datetime;
DECLARE @End datetime;
DECLARE @DT datetime = current_datetime();
SET @Start = @DT - ... ;
SET @End = @Start + ... ;

SELECT A.ID, A.Timestamp, B.Package_quantity,
       safe_add(A.Package_quantity, B.Package_quantity) AS defect
FROM `tbl_ER020` A
INNER JOIN `tbl_ER220` B
    ON A.ID = B.ID
WHERE @Start <= A.Timestamp AND A.Timestamp < @End
ORDER BY A.Timestamp ASC;
Why write the query this way?
Your query's runtime depends on the number of records in the table, because the predicate (and the conversion) must be evaluated for every row.
My query's runtime depends on the number of records in the result, because with constant boundaries and a bare Timestamp column the server can use an index to jump straight to the matching range.
On a huge table, that makes a big difference.
[Update]
Read here:
PostgreSQL Documentation 9.1: Function Volatility Categories
Quote:
A VOLATILE function can do anything, including modifying the database. It can return different results on successive calls with the same arguments. The optimizer makes no assumptions about the behavior of such functions. A query using a volatile function will re-evaluate the function at every row where its value is needed.
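The per-row re-evaluation the quote describes is easy to observe. A minimal sketch using SQLite's random(), which is volatile, so the engine runs it once for every row rather than treating it as a constant:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ID INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

# One draw per row: if random() were folded to a constant,
# every row would get the same value.
values = [r[0] for r in con.execute("SELECT random() FROM t")]
print(values)
print(len(set(values)) > 1)  # almost surely True: re-evaluated per row
```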