Channel: SCN: Message List

Re: "configure size for data cache (tempdb_cache1) is not sufficient to handle the sort buffers (4005) requested. "


If all ExecutionCounts are 0 for all plans, then it is possible that the proc starts executing and execution is aborted - e.g. the client kills the query/session. If the exec times/CPU times seem large, then that is a possibility....

 

4,068,458 memory pages as the HWM for MEMC_SCOMPILE1 is roughly 8GB...which for some reason doesn't appear to be aggregating correctly to monProcedureCacheModuleUsage - the HWM there should minimally be the max(HWM) of ModuleID=5 from monProcedureCacheMemoryUsage. You might want to open an incident on that so someone can look into it. The 4GB for Procedure Objects looks about right given what you showed before....
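As a quick sanity check on that figure, here is a minimal sketch of the pages-to-GB arithmetic, assuming the classic 2KB ASE memory page size (verify the page size against your own server's configuration):

```python
# Hedged sketch: converting an ASE memory-page HWM to gigabytes.
# The 2 KB memory page size is an assumption - check your server config.

PAGE_SIZE_BYTES = 2 * 1024  # 2 KB memory pages (assumption)

def pages_to_gb(pages: int) -> float:
    """Convert a page count (e.g. an HWM column from
    monProcedureCacheMemoryUsage) to gigabytes."""
    return pages * PAGE_SIZE_BYTES / 1024**3

print(f"{pages_to_gb(4068458):.2f} GB")  # prints "7.76 GB" - roughly 8GB
```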

 

Yes.....so here is the deal on partitioned tables. If you update statistics on the entire table, then you get #steps across all the partitions in total. So, for example, if you have 10 partitions and you ask for 200 steps via update index stats &lt;tablename&gt;, then you would likely get 20 steps per partition. However, if you instead run update index stats &lt;tablename&gt; &lt;partitionname&gt; with 200 values, then you would get 200 steps for that partition. If you did that for all 10 partitions, you would have 2000 steps in total. Depending on the number of index keys, etc., that could be quite a lot.
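The step-count arithmetic above can be sketched as follows (the even split across partitions is an approximation of the table-level behavior described, not an exact ASE formula):

```python
# Hedged sketch of the step-count arithmetic for partitioned tables.
# Table-level update stats spreads the requested steps across partitions;
# per-partition update stats gives each partition the full request.

def steps_per_partition_table_level(requested_steps: int, num_partitions: int) -> int:
    # Table-level request: ~requested_steps split across all partitions.
    return requested_steps // num_partitions

def total_steps_partition_level(requested_steps: int, num_partitions: int) -> int:
    # Per-partition requests: every partition gets the full step count.
    return requested_steps * num_partitions

print(steps_per_partition_table_level(200, 10))  # prints 20  (steps per partition)
print(total_steps_partition_level(200, 10))      # prints 2000 (total steps)
```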

 

During query optimization, we would likely load all steps from all partitions - unless partition elimination can be done. For example, static SQL sent in from an isql session with inline predicates can do partition elimination during optimization. However, SQL queries inside stored procs, cached statements, or fully prepared statements might not be so lucky. As a result, we might - and likely will - load all the stats for all partitions.

 

Yes - anything above 2000 steps is likely too high. I would take a look and see how many of those tables with &gt;2000 steps are NOT partitioned, and immediately have your DBAs do a delete stats and then re-run update index stats requesting 100 steps with a histogram tuning factor of 5 (yes, this is part of the update stats syntax now) on the unpartitioned tables. Then for partitioned tables, it gets a bit fun. If you ONLY have local indexes (no global indexes), then I would work out what 10-20 steps per partition comes to in total and see whether that can't be made to work. If you have global indexes, then I would drop all the stats (delete stats) and run update index stats at the table level with 250 steps and a histogram tuning factor of 5 (again). If using a rolling partition scheme, you could always update statistics on the current partition with 100 steps (default HTF), and update stats on older (rolled-off) partitions with 20 steps (default HTF).
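As a rough sketch, the workflow above might look like the following (table and partition names are placeholders, and you should verify the exact clause syntax against your ASE version's reference manual):

```sql
-- Unpartitioned table: drop the old stats, then rebuild with fewer
-- steps and a histogram tuning factor of 5.
delete statistics my_table
go
update index statistics my_table
    using 100 values
    with histogram_tuning_factor = 5
go

-- Rolling partition scheme: more steps on the current partition,
-- fewer on older, rolled-off partitions (default HTF on both).
update index statistics my_table partition p_current
    using 100 values
go
update index statistics my_table partition p_old
    using 20 values
go
```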

 

However, you also may want to look very hard at upgrading to ASE 16 SP01+. An extensive amount of work was done in ASE 16 SP01 on optimizing with partitions - particularly when there are large numbers of partitions and/or parallel queries. We are talking a rather dramatic 10x or greater reduction in query optimization time - and likely far fewer resources, as a lot of unnecessary optimization paths are no longer processed (e.g. better search space pruning). This helps because once optimization is done, the proc cache used for stats by one query optimization can be returned to the proc buffer pool more quickly to be used by others.

 

My general suspicion is that all the recompiles, etc. are hitting the proc cache hard because of these large quantities of steps - especially given your concurrency (150+ simultaneous executions) - and that your tempdb cache was simply the straw that broke the camel's back.

