First of all, time measurement depends on the measurement tool used. The classic tool for developing T-SQL code is still Microsoft SQL Server Management Studio (SSMS). By default, it displays a small stopwatch in the development environment that measures in hours, minutes, and seconds. If a query runs for less than a second, developers therefore see 00:00:00, which naturally raises the question of what is left to optimize. There are various ways to obtain more granular values, down to the millisecond range, but is this actually of any use?
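One common way to get millisecond-level timings in SSMS is the SET STATISTICS TIME option, which reports compile and execution times in the Messages tab. A minimal sketch, assuming a placeholder table name:

```sql
-- Enable millisecond-level timing output for the current session.
SET STATISTICS TIME ON;

-- Example query; Sales.Orders is a placeholder table name.
SELECT COUNT(*)
FROM Sales.Orders;

-- SSMS now prints CPU time and elapsed time in milliseconds
-- to the Messages tab, for example:
--   SQL Server Execution Times:
--     CPU time = 15 ms, elapsed time = 23 ms.

SET STATISTICS TIME OFF;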
As has been emphasized several times, developers have only limited influence on how T-SQL statements are processed. For example, it is not known whether the required data is already in the SQL Server buffer cache or whether it must first be loaded from the storage system. This is often referred to as a warm or cold cache. Of course, the developers could simply execute the T-SQL statement several times on the test system, after which the data is very likely to be in the buffer cache. But is that realistic? No assumptions can be made about the buffer cache contents of a later production database server, especially when many parallel processes will be running.
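To measure under cold-cache conditions on a test system, the buffer cache can be flushed explicitly. A minimal sketch, again using the placeholder table name; this affects the entire instance and should never be run against a production server:

```sql
-- Simulate a cold cache on a TEST system only.
CHECKPOINT;              -- write dirty pages to disk first
DBCC DROPCLEANBUFFERS;   -- remove clean pages from the buffer cache

-- The next execution must read its pages from storage (cold cache).
SELECT COUNT(*)
FROM Sales.Orders;       -- placeholder table name
```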
The next point when evaluating database performance through time measurement is the general hardware of the server. Development teams are very keen to argue that realistic, binding tests are impossible without comparable hardware. For the responsible management, this argument is usually an understandable reason why performance can only be given limited attention during development, because it is only rarely possible to provide comparable server systems. Processor cores (their number, clock speed, generation, etc.) and main memory are particularly cost-intensive resources. Storage and network connectivity are usually provisioned generically and are partly comparable between test and production systems. Nevertheless, a team can always fall back on the argument that it needs, for example, a server with 1.5 TB of main memory and 48 processor cores because customers use comparable systems. This is often not feasible for technical, organizational and, of course, budgetary reasons.
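To at least make the hardware gap between test and production explicit, the resources SQL Server actually sees can be queried on both systems. A minimal sketch using the sys.dm_os_sys_info DMV (physical_memory_kb requires SQL Server 2012 or later):

```sql
-- Report the CPU and memory resources visible to this instance;
-- running this on both the test and the production server
-- documents the hardware gap between them.
SELECT cpu_count,                                    -- logical processors
       hyperthread_ratio,                            -- logical per physical core
       physical_memory_kb / 1024 / 1024 AS memory_gb,
       sqlserver_start_time
FROM sys.dm_os_sys_info;
```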