Why does this derived table improve performance?

















I have a query which takes a JSON string as a parameter. The JSON is an array of latitude/longitude pairs.
An example input might be the following.



declare @json nvarchar(max)= N'[[40.7592024,-73.9771259],[40.7126492,-74.0120867]
,[41.8662374,-87.6908788],[37.784873,-122.4056546]]';
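
For reference, openjson(@json) without an explicit schema returns one row per array element, with key holding the zero-based index and value holding the raw element text; that is what the queries below consume. Roughly:

select [key], value
from openjson(@json);

-- key   value
-- 0     [40.7592024,-73.9771259]
-- 1     [40.7126492,-74.0120867]
-- 2     [41.8662374,-87.6908788]
-- 3     [37.784873,-122.4056546]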


It calls a TVF that calculates the number of POIs around a geographical point, at 1, 3, 5, and 10 mile distances.



create or alter function [dbo].[fn_poi_in_dist](@geo geography)
returns table
with schemabinding as
return
-- 1609.344e = metres per mile; STDistance on geography (SRID 4326) returns metres
select count_1 = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 1,1,0e))
,count_3 = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 3,1,0e))
,count_5 = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 5,1,0e))
,count_10 = count(*)
from dbo.point_of_interest
where LatLong.STDistance(@geo) <= 1609.344e * 10
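
For a sense of how the function behaves on its own, a single standalone call (reusing the first coordinate pair from the sample JSON above) would look something like this:

-- counts of POIs within 1/3/5/10 miles of one point
select count_1, count_3, count_5, count_10
from dbo.fn_poi_in_dist(geography::Point(40.7592024, -73.9771259, 4326));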


The intent of the JSON query is to bulk-call this function. If I call it like this, performance is very poor, taking nearly 10 seconds for just 4 points:



select row=[key]
,count_1
,count_3
,count_5
,count_10
from openjson(@json)
cross apply dbo.fn_poi_in_dist(
geography::Point(
convert(float,json_value(value,'$[0]'))
,convert(float,json_value(value,'$[1]'))
,4326))


plan = https://www.brentozar.com/pastetheplan/?id=HJDCYd_o4



However, moving the construction of the geography inside a derived table causes the performance to improve dramatically, completing the query in about 1 second.



select row=[key]
,count_1
,count_3
,count_5
,count_10
from (
select [key]
,geo = geography::Point(
convert(float,json_value(value,'$[0]'))
,convert(float,json_value(value,'$[1]'))
,4326)
from openjson(@json)
) a
cross apply dbo.fn_poi_in_dist(geo)


plan = https://www.brentozar.com/pastetheplan/?id=HkSS5_OoE



The plans look virtually identical. Neither uses parallelism and both use the spatial index. There is an additional lazy spool in the slow plan that I can eliminate with the hint option(no_performance_spool), but the query performance does not change; it remains much slower.
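
For clarity, the hint simply goes on the end of the slow query, along these lines:

select row=[key]
,count_1
,count_3
,count_5
,count_10
from openjson(@json)
cross apply dbo.fn_poi_in_dist(
geography::Point(
convert(float,json_value(value,'$[0]'))
,convert(float,json_value(value,'$[1]'))
,4326))
option (no_performance_spool); -- removes the lazy spool, but the runtime stays the same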



Running both queries, each with the added hint, in a single batch shows them with equal estimated cost.



SQL Server version:
Microsoft SQL Server 2016 (SP1-CU7-GDR) (KB4057119) - 13.0.4466.4 (X64)



So my question is: why does this matter? How can I know when I should calculate values inside a derived table and when I should not?










Tags: sql-server, sql-server-2016
Asked May 2 at 14:13 by Michael B (edited May 2 at 14:35)



















  • By "weigh" do you mean estimated cost %? That number is virtually meaningless, especially when you're bringing in UDFs, JSON, CLR via geography, etc. – Aaron Bertrand, May 2 at 14:53











  • I'm aware, but looking at IO stats they are identical too. Both do 358306 logical reads on the point_of_interest table, both scan the index 4602 times, and both generate a worktable and workfile. The estimator believes these plans are identical yet performance says otherwise. – Michael B, May 2 at 14:59











  • It seems like actual CPU is the issue here, likely due to what Martin pointed out, not I/O. Unfortunately estimated costs are based on CPU and I/O combined and don't always reflect what happens in reality. If you generate actual plans using SentryOne Plan Explorer (I work there, but the tool is free with no strings), then change actual costs to CPU only, you might get better indicators of where all that CPU time was spent. – Aaron Bertrand, May 2 at 15:19












  • @AaronBertrand - Does Plan Explorer show Actual Time Statistics on nodes? That would definitely help show the busy operators. Elapsed time for the stream aggregate was 9,191 ms but only 2,982 ms for its child operator (of which the filter took 2,387 ms). – Martin Smith, May 2 at 15:22







  • @MartinSmith Not per operator yet, no. We do surface those at the statement level. Currently we still rely on the initial implementation from the DMV before those additional metrics were added at the lower level. And we've been a bit busy working on something else you'll see soon. :-) – Aaron Bertrand, May 2 at 15:38



















1 Answer

I can give you a partial answer that explains why you are seeing the performance difference - though that still leaves some open questions (such as: can SQL Server produce the more optimal plan without introducing an intermediate table expression that projects the expression as a column?).




The difference is that in the fast plan the work needed to parse the JSON array elements and create the Geography is done 4 times (once for each row emitted from the openjson function) - whereas it is done more than 100,000 times that in the slow plan.



In the fast plan...



geography::Point(
convert(float,json_value(value,'$[0]'))
,convert(float,json_value(value,'$[1]'))
,4326)


is assigned to Expr1000 in the compute scalar to the left of the openjson function. This corresponds to geo in your derived table definition.



(screenshot: Compute Scalar properties showing Expr1000 defined as the geography::Point expression)



In the fast plan the filter and stream aggregate reference Expr1000. In the slow plan they reference the full underlying expression.



(screenshot: Stream Aggregate properties)



The filter is executed 116,995 times, with each execution requiring an expression evaluation. The stream aggregate has 110,520 rows flowing into it for aggregation and creates three separate aggregates using this expression: 110,520 * 3 + 116,995 = 448,555 evaluations. Even if each individual evaluation takes only 18 microseconds, this adds up to about 8 seconds of additional time for the query as a whole.



You can see the effect of this in the actual time statistics in the plan XML (annotated in red below from the slow plan and blue for the fast plan - times are in ms)



(screenshot: actual time statistics from the plan XML, slow plan annotated in red, fast plan in blue, times in ms)



The stream aggregate has an elapsed time 6.209 seconds greater than its immediate child, and the bulk of that child's time was taken up by the filter. This corresponds to the extra expression evaluations.
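
If you want to pull those numbers out yourself, requesting the actual plan (via "Include Actual Execution Plan" in SSMS, or SET STATISTICS XML) exposes them per operator in the plan XML's RunTimeInformation nodes as ActualElapsedms / ActualCPUms on newer builds, including the 2016 SP1 build in the question. A sketch against the slow query:

set statistics xml on;  -- returns the actual plan XML alongside the results

select row=[key], count_1, count_3, count_5, count_10
from openjson(@json)
cross apply dbo.fn_poi_in_dist(
geography::Point(
convert(float,json_value(value,'$[0]'))
,convert(float,json_value(value,'$[1]'))
,4326));

set statistics xml off;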




By the way: in general it is not guaranteed that expressions labelled like Expr1000 are calculated only once and never re-evaluated, but in this case the execution timing discrepancy makes it clear that is what happened here.
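
For completeness, the derived table is not the only way to get the expression projected out once: as the asker notes in the comments, a correlated cross apply (select ...) produces the same fast plan. A sketch of that variant:

select row=[key]
,count_1
,count_3
,count_5
,count_10
from openjson(@json)
cross apply (select geo = geography::Point(
convert(float,json_value(value,'$[0]'))
,convert(float,json_value(value,'$[1]'))
,4326)) f
cross apply dbo.fn_poi_in_dist(f.geo)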






Answered May 2 at 16:07 by Martin Smith (edited 2 days ago)

























  • As an aside, if I switch the query to use a cross apply to generate the geography, I also get the fast plan: cross apply(select geo=geography::Point( convert(float,json_value(value,'$[0]')) ,convert(float,json_value(value,'$[1]')) ,4326))f – Michael B, May 2 at 16:39











  • Unfortunate, but I am wondering if there's an easier way to get it to generate the fast plan. – Michael B, May 2 at 17:02











  • Sorry for the amateur question, but what tool is shown in your images? – BlueRaja - Danny Pflughoeft, May 2 at 18:23






  • @BlueRaja-DannyPflughoeft these are execution plans shown in Management Studio (the icons used in SSMS have been updated in recent versions, if that was the reason for the question) – Martin Smith, May 2 at 18:26










