
Monday, March 26, 2012

Product of a Field

Hi All,

I have a field with numbers (double datatype) as values, and I want an aggregate function that gives me the product of all the values in that field. Is there a way to do that in SSRS?

|||

Check this post where I have suggested a solution.

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1384503&SiteID=1

Shyam
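
Since SSRS has no built-in Product aggregate, a common workaround (possibly what the linked post describes, though that is not confirmed here) is the log/exp identity. Below is a minimal T-SQL sketch, assuming a hypothetical table dbo.MyNumbers with a float column [Value]; in an SSRS expression the same identity is typically written as =Exp(Sum(Log(Fields!Value.Value))) when all values are positive.

-- Minimal sketch, assuming a hypothetical table dbo.MyNumbers with a
-- float column [Value]. For strictly positive values:
--   product(x) = EXP(SUM(LOG(x)))
SELECT EXP(SUM(LOG([Value]))) AS ProductOfValues
FROM dbo.MyNumbers
WHERE [Value] > 0;

-- A version that also handles zeros and negative values:
SELECT
    CASE
        WHEN SUM(CASE WHEN [Value] = 0 THEN 1 ELSE 0 END) > 0
            THEN 0.0                                 -- any zero forces the product to zero
        ELSE EXP(SUM(LOG(ABS(NULLIF([Value], 0)))))  -- magnitude; NULLIF guards LOG(0)
           * CASE WHEN SUM(CASE WHEN [Value] < 0 THEN 1 ELSE 0 END) % 2 = 1
                  THEN -1 ELSE 1 END                 -- sign from the count of negatives
    END AS ProductOfValues
FROM dbo.MyNumbers;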



Wednesday, March 21, 2012

processor maxed out blocks other processes...

Hi,
I'm using a four-CPU hyperthreaded Xeon server (8 logical processors) with Windows 2003 Enterprise and SQL Server 2000 Enterprise (8.0.818).
Whenever a user launches a specific function, let's say a stored procedure, it uses 100% of one of the 8 CPUs for 30-40 minutes, and the workload jumps from one processor to another (every time the procedure issues a new query).
I read here that most queries, especially the simple ones or the bulk ones, run on only one processor, so I think the behaviour of the procedure described above is normal.
My problem is that when a second user comes in, runs another application against another database, and starts a big process (again a stored procedure doing various things for 30 minutes or so), I don't see two processors running at 100% but only one, and the elapsed time of the two procedures running simultaneously is almost doubled...
Maybe I have a performance bottleneck somewhere, but I don't see where; I have some pretty good SCSI disks and 6 GB of RAM that almost never gets more than 30% used.
Short of upgrading my processor speed, is there a way to get better performance? Like forcing more parallelism on some processes or queries. I read something about query plan complexity determining the degree of parallelism of a query; is there a way to tune this?
Thanks in advance.
Best regards,
Christophe.
There is a setting that adjusts the threshold for parallelism. sp_configure will show you the setting.
Of course, indexing well is almost always the best path to follow.
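
For reference, a minimal sketch of inspecting and changing the setting mentioned above (the table name in the last statement is hypothetical). One note: BOL describes the cost threshold as an estimated serial-plan cost in seconds (default 5), not milliseconds.

-- Show advanced options so the parallelism settings are visible:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Current values:
EXEC sp_configure 'cost threshold for parallelism';
EXEC sp_configure 'max degree of parallelism';

-- Lower the threshold so cheaper plans qualify for parallel execution
-- (illustrative value, not a recommendation):
EXEC sp_configure 'cost threshold for parallelism', 3;
RECONFIGURE;

-- A single query can also be capped or forced per statement:
SELECT COUNT(*) FROM dbo.SomeBigTable  -- hypothetical table
OPTION (MAXDOP 4);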
|||"Ken Dutton" <fj60landcruiser@.yahoo.com> a crit dans le message de
news:505E0AD6-A9E0-4BE4-9214-1B3BC7BEE3B8@.microsoft.com...
> There is a setting that adjusts the threshhold for parallism.
sp_configure will show you the setting.
Thanks for your answer.
The threshold is currently at 3. I read in BOL that it's in milliseconds, so I don't think I can make it any lower...
But that helped me notice I hadn't activated the AWE parameter on this server...
I was wondering why I never used more than 2 GB!!

> Of course indexing well is almost always the best path to follow.
Yay!
That's what I'm telling my SQL devs and DB designers every morning, but that doesn't seem to motivate them... I think a wooden club with long nails would do the trick, but I'll try to coerce them into indexing their tables and views one last time...
I'll do some tracing and index tuning first; I'm not a monster... yet.
Anyway, thanks again.
Christophe
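
Since the AWE discovery above is what unlocked the unused memory, here is a minimal sketch of turning it on for SQL Server 2000 Enterprise; the memory cap value is illustrative, the service account needs the Lock Pages in Memory privilege, and the service must be restarted for the change to take effect.

-- Enable AWE so the instance can address memory beyond the
-- 2-3 GB user-mode limit of 32-bit Windows:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;

-- With AWE on, SQL Server 2000 does not release this memory dynamically,
-- so cap it to leave room for the OS (value in MB, illustrative):
EXEC sp_configure 'max server memory (MB)', 5120;
RECONFIGURE;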


Saturday, February 25, 2012

Process Cube

I have a cube with one customer count measure that uses the DistinctCount aggregate function. My test environment is Windows Server 2003 x64 SP1 with 300 GB of RAM and SQL Server 2005 EE. When I fully process the cube, I use the Performance monitor to watch the MSAS:Proc Aggregations counters. I can see Temp file bytes written/sec being used, but RAM usage never goes above 15 GB. In AS2000 I could raise the Process Buffer Size to avoid the temp file and speed up processing. Should I take the same action in AS2005? My OLAP\Process\BufferMemoryLimit is already set to 60, which should allow 180 GB (60% of 300 GB) to be used, so why is the temp file still in use? Should I also adjust OLAP\ProcessPlan\DistinctBuffer, the new setting in AS2005?

Analysis Services 2005 is quite different in the way it does processing and manages memory.

First, I would recommend you look at the performance guide for some clues on how to improve processing performance. I am also not sure what the real concern is: is it that you are seeing Analysis Server use less memory during processing, or is it that you are comparing AS2000 and AS2005 and seeing slower processing performance?

Just to note, and you might know it already: processing of distinct count partitions is quite different from processing of regular partitions. The same goes for querying. Records in distinct count partitions are stored sorted. You can even see that to process such a partition, Analysis Server sends a different SQL query: it asks SQL Server to return the data in sorted order.
What I am getting at is that processing and querying of distinct count partitions is quite different from regular partitions. If you want to optimize processing and query performance, the first and most important step is to make sure you partition your cube along the column you take your distinct count over.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
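
To make the partitioning advice concrete, here is a hypothetical sketch of the relational queries behind AS2005 partitions for a distinct count on a CustomerKey column (table, column, and boundary values are all illustrative). The key ranges must be disjoint so that no customer value can land in two partitions.

-- Partition source queries, each covering a disjoint CustomerKey range:
SELECT * FROM dbo.FactSales WHERE CustomerKey >= 0      AND CustomerKey < 100000;  -- partition 1
SELECT * FROM dbo.FactSales WHERE CustomerKey >= 100000 AND CustomerKey < 200000;  -- partition 2
SELECT * FROM dbo.FactSales WHERE CustomerKey >= 200000;                           -- partition 3
-- As noted above, during processing Analysis Server itself requests the
-- rows sorted on the distinct count column.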

|||My concern is how memory is used in AS2005. My server has 300 GB of memory, but AS2005 doesn't use all of it during processing; instead it uses the temp file, which can slow down processing performance.