Saturday, February 25, 2012
Process Detail (T-SQL)
Just want to know where we can find the last T-SQL statement executed on a process. This is the one that pops up when we double-click a process in the Process Info list. Thank you in advance :-)
dbcc inputbuffer (@spid)
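For example, if the process in question shows up as spid 52 in the Process Info list (52 is just a hypothetical value; pick up the real one from sp_who2 or the list itself):
-- identify the spid of interest
EXEC sp_who2
-- show the last batch the client sent on that connection
DBCC INPUTBUFFER (52)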
Process Cube using SQL DTS
Hi
I have created a package to do this which simply returns with the error
'1 task(s) failed during execution'. I can process the dimensions and
cubes manually in Analysis Manager fine.
SQL and Analysis Manager are both on the same server, and the task
appears to be running as the domain administrator user.
Can anyone help?
Cheers
Brettos.
brettos
Posted via http://www.webservertalk.com
View this thread: http://www.webservertalk.com/message542345.html
In workflow properties, set "Execute on main package thread" for all
packages that contain OLAP processing.
James Ma
"brettos" wrote:
> Hi
> I have created a package to do this which simply returns with the error
> '1 task(s) failed during execution'. I can process the dimensions and
> cubes manually in analysis Manager fine.
> SQL and analysis Manager are both on the same server and the task
> appears to be running with the domain administrator user.
> Cany anyone help?
> Cheers
> Brettos.
>
> --
> brettos
> Posted via http://www.webservertalk.com
> View this thread: http://www.webservertalk.com/message542345.html
>
|||To help in your debugging:
Open the package in design mode.
Right-click and select "Package Properties".
Select the "Logging" tab.
Provide a log file location to use during processing of the package.
Save and close the package.
Now run the package.
Open the log file and see what details you can glean.
Report back to us.
Hope this helps.
dlr
"brettos" <brettos.1gk5p2@.mail.webservertalk.com> wrote in message
news:brettos.1gk5p2@.mail.webservertalk.com...
> Hi
> I have created a package to do this which simply returns with the error
> '1 task(s) failed during execution'. I can process the dimensions and
> cubes manually in analysis Manager fine.
> SQL and analysis Manager are both on the same server and the task
> appears to be running with the domain administrator user.
> Cany anyone help?
> Cheers
> Brettos.
>
> --
> brettos
> Posted via http://www.webservertalk.com
> View this thread: http://www.webservertalk.com/message542345.html
>
Process Cube using Integration Services
Hi,
I want to process my cube using Process Data and Process Index instead of Process Full. However, after configuring the 2 Analysis Services Processing Tasks (one for Process Data and the other for Process Index) and executing them sequentially (Process Data first, then Process Index), I got this error:
Errors in the metadata manager. The process type specified for the CASES cube is not valid since it is not processed
Have I done the right thing?
The reason I prefer using Process Data and then Process Index is that it is much faster than Process Full.
cherriesh
It's interesting that you're seeing an error, because 'Process Full' is nothing more than 'Process Data' and then 'Process Index' under the covers.
I'll admit that I was skeptical that executing the two sequentially was faster for you than process full, but I found an Analysis Services best practices whitepaper that confirmed this could happen.
http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/ssaspbpr.mspx
Are you sure you are using 'Process Data' on the cube object itself and not a measure group or partition?
Also, you may want to post this question in the Analysis Services forum.
|||Hi,
I did the Process Data for both the dimensions and the cube, and also did Process Index for the two.
Am I right there?
cherriesh
|||I think that might be your issue. If it processes a dimension after the cube that would explain the error message you received on the step to process the index. Try doing process data for just the cube (it should do the dimensions as part of this).
process cube but leave transaction open?
We have a massive cube. All but one of the fact tables are loaded by 3am. But the final fact table isn't loaded in the SQL data warehouse until 11am. The users want the cube processed and available as soon as possible. But the cube can't show ANY updated data until it shows ALL updated data, including the fact table from 11am. Any thoughts on making this happen?
One option would be to process the cube at 3am in another database. Then process the final measure group at 11am. Then use the Synchronize command to get those changes to the live database soon after 11am. But there are two drawbacks... 1) This requires twice the amount of disk space as before and 2) I can't figure out a way to synchronize two databases on the same server.
Another option would be to process the cube at 3am but leave that transaction open (so end users can't see the new data) for 8 hours until you process the final measure group and then close the transaction. This doesn't seem like a good plan, but I wanted to hear people's thoughts.
I would certainly experiment with leaving the transaction open for 8 hours. I don't see why it doesn't seem like a good plan to you. Note that you will still have doubled disk space usage, since two versions will be coexisting for the 8 hours.|||Well, maybe it's not a dumb idea. I guess the reason I was worried about it is just my relational background and the idea to never leave transactions open. But I guess that doesn't matter in this case.
So my main concern with the technical feasibility is whether the server will clean up the idle session (and in so doing cancel the transaction). Am I going to have to run some cheap discover function every 5 mins for 8 hours to keep the session alive? Any other way?
Thanks for the feedback Mosha.
|||If the server canceling the idle session becomes a problem, you can always bump up the session expiration timeout.
|||I've got this working. Cube processing is broken up into two parts with the connection being dropped in between. Just like you said, it works fine. Just had to bump up the IdleOrphanSessionTimeout setting.
There is one problem, though. If we get a failure on part 2 of the processing (like if there's an invalid dimension key in a fact table) then that failure rolls back the entire transaction, not just the stuff being processed during part 2. Wondering if there's a way around this. I can walk you through reproducing this against Adventure Works:
First, change the query binding of the Reseller_Sales_2004 partition of the Reseller Sales measure group to be:
SELECT [dbo].[FactResellerSales].[ProductKey], -999 AS OrderDateKey,
 [dbo].[FactResellerSales].[DueDateKey], [dbo].[FactResellerSales].[ShipDateKey],
 [dbo].[FactResellerSales].[ResellerKey], [dbo].[FactResellerSales].[EmployeeKey],
 [dbo].[FactResellerSales].[PromotionKey], [dbo].[FactResellerSales].[CurrencyKey],
 [dbo].[FactResellerSales].[SalesTerritoryKey], [dbo].[FactResellerSales].[SalesOrderNumber],
 [dbo].[FactResellerSales].[SalesOrderLineNumber], [dbo].[FactResellerSales].[RevisionNumber],
 [dbo].[FactResellerSales].[OrderQuantity], [dbo].[FactResellerSales].[UnitPrice],
 [dbo].[FactResellerSales].[ExtendedAmount], [dbo].[FactResellerSales].[UnitPriceDiscountPct],
 [dbo].[FactResellerSales].[DiscountAmount], [dbo].[FactResellerSales].[ProductStandardCost],
 [dbo].[FactResellerSales].[TotalProductCost], [dbo].[FactResellerSales].[SalesAmount],
 [dbo].[FactResellerSales].[TaxAmt], [dbo].[FactResellerSales].[Freight],
 [dbo].[FactResellerSales].[CarrierTrackingNumber], [dbo].[FactResellerSales].[CustomerPONumber]
FROM [dbo].[FactResellerSales]
WHERE OrderDateKey >= '915' AND OrderDateKey <= '1280'
(Note that "-999 as OrderDateKey" will cause a failure.)
Second, pull up the directory containing the 2004 partition for Internet Sales. (On your computer it should be something like c:\program files\Microsoft SQL Server\mssql.2\OLAP\data\Adventure Works DW.0.db\Adventure Works DW.18.cub\Fact Internet Sales 1.18.det\Internet_Sales_2004.16.prt.) After part 1 completes, you'll see the number of files in this directory double. After part 2 fails, you'll see those new files disappear as the transaction is rolled back.
Then, compile a little VB.NET app, put a breakpoint on the Sleep command so you can see the number of files in that 2004 partition directory double. Then run it:
'requires a reference to Microsoft.AnalysisServices (AMO)
Imports Microsoft.AnalysisServices

Public Class CubeProcessor

    Public Shared Function Main(ByVal args As String()) As Integer
        Dim sServer As String = "localhost"
        Dim sDatabase As String = "Adventure Works DW"
        ProcessPart1(sServer, sDatabase)
        System.Threading.Thread.Sleep(5000) 'simulate a break in the program
        ProcessPart2(sServer, sDatabase)
        Return 0
    End Function

    Public Shared Sub ProcessPart1(ByVal sServer As String, ByVal sDatabase As String)
        Dim server As New Server()
        server.Connect(sServer)
        Dim database As Database = server.Databases.FindByName(sDatabase)
        'save the session ID so we can retrieve it and connect to this session later
        'alternately, it could be saved to a SQL database instead
        database.Annotations.SetText("ProcessCubeSessionID", server.SessionID)
        database.Update(UpdateOptions.Default, UpdateMode.Update)
        server.BeginTransaction()
        Try
            Dim cube As Cube = database.Cubes.FindByName("Adventure Works")
            Dim mg As MeasureGroup = cube.MeasureGroups.FindByName("Internet Sales")
            mg.Process(ProcessType.ProcessFull)
            'after processing has succeeded, change the orphan session timeout
            'to 24 hours so that this session we're about to leave open won't time out
            'before we're done with it
            server.ServerProperties("IdleOrphanSessionTimeout").Value = CStr(24 * 60 * 60)
            server.Update()
            'False means don't end the session
            'leave the session open so we can connect to it later
            'and continue with the open transaction
            server.Disconnect(False)
            'don't commit the transaction now
            'leave it open until we finish the processing later
        Catch ex As Exception
            server.RollbackTransaction()
            'blank out the ProcessCubeSessionID annotation
            database.Annotations.SetText("ProcessCubeSessionID", Nothing, True)
            database.Update(UpdateOptions.Default, UpdateMode.Update)
            server.Disconnect(True)
            Throw
        End Try
    End Sub

    Public Shared Sub ProcessPart2(ByVal sServer As String, ByVal sDatabase As String)
        Dim server As New Server()
        server.Connect(sServer)
        Dim database As Database = server.Databases.FindByName(sDatabase)
        Dim oldSession As String = database.Annotations.GetText("ProcessCubeSessionID")
        If String.IsNullOrEmpty(oldSession) Then
            Throw New Exception("Was not able to find the existing session which " _
                & "processed the other 95% of the cube and has the open transaction! " _
                & "The cube will not be processed.")
        End If
        server.Disconnect(True)
        Try
            server.Connect(sServer, oldSession)
            Dim cube As Cube = database.Cubes.FindByName("Adventure Works")
            Dim mg As MeasureGroup = cube.MeasureGroups.FindByName("Reseller Sales")
            mg.Process(ProcessType.ProcessFull) 'BUG?: if this fails, it rolls back the transaction!!!
            server.CommitTransaction()
            'blank out the ProcessCubeSessionID annotation
            database.Annotations.SetText("ProcessCubeSessionID", Nothing, True)
            database.Update(UpdateOptions.Default, UpdateMode.Update)
            'set this property back to the default now that
            'we're done needing orphan sessions to survive
            server.ServerProperties("IdleOrphanSessionTimeout").Value = _
                server.ServerProperties("IdleOrphanSessionTimeout").DefaultValue
            server.Update()
            server.Disconnect(True) 'True means to end the session
        Catch ex As Exception
            'there was a problem with the last 5% of the processing...
            'so leave the session open so the problem can be corrected and we can
            'start with 95% of the processing from earlier in the morning
            'being completed already
            server.Disconnect(False) 'False means don't end the session
            Throw
        End Try
    End Sub

End Class
If you search for the line in the code which has a comment saying "BUG?", that's where the processing fails because of our tweak to the partition query. When that processing fails, the entire transaction is rolled back, not just the part 2 work.
|||I'd still like to hear if anyone knows a workaround, but I've posted this as a bug:
https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=278000
|||I expect the bug that you filed to be resolved "By Design". The definition of a transaction is that it either fully succeeds or fully fails. That's the letter "A" in the ACID definition - A for Atomicity. You have a very long transaction, but it doesn't matter - if anything fails within the transaction, the entire transaction is rolled back.
Process Cube
Analysis Services 2005 is quite different in the way it does processing and manages memory.
First, I would recommend you look at the performance guide for some clues on how to improve processing performance. I'm also not sure what the real concern is: is it that you are seeing Analysis Server using less memory during processing? Is it that you are comparing AS2000 and AS2005 and seeing slower processing performance?
Just to note, and you might know it already, processing of distinct count partitions is quite different from processing of regular partitions. Same goes for querying. Records in distinct count partitions are stored sorted. You can even see that for processing such a partition Analysis Server will send a different SQL query; it will ask for the data from SQL Server to come in sorted order.
What I am getting at is that processing and querying of distinct count partitions is quite different from regular partitions. If you want to optimize processing and query performance, the first and most important step is to make sure you partition your cube along the dimension that you get your distinct count for.
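As a rough illustration only (the table and column names here are invented, not what Analysis Server literally emits), the relational query for a distinct count partition comes back ordered by the distinct count column:
SELECT ProductKey, OrderDateKey, SalesAmount, CustomerKey
FROM dbo.FactInternetSales
ORDER BY CustomerKey -- the distinct count measure's source column drives the sort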
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
process cube
I am using Analysis Server 2000. I have a cube on one of the databases which I am trying to process through the Business Intelligence Development Studio.
i have specified the Server and the database name in the 'connection managers' through 'New Analysis Services Connection'
I added the 'Analysis Services Processing Task' to the Control flow but get the following error when i double click on it:
"A connection cannot be made. Ensure that the server is running'
Does anyone know what I am doing wrong?
Any help is appreciated.
Unfortunately you will not be able to use AS 2005 tools against AS2000.
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Edward,
Are you from Microsoft's SQL Server team?
Just wanted to know if the answer was official :)
thanks
Process Cube
Hi,
I want to process my cube using Process Data and Process Index instead of Process Full. However, after configuring the 2 Analysis Services Processing Tasks (one for Process Data and the other for Process Index) and executing them sequentially (Process Data first, then Process Index), I got this error:
Errors in the metadata manager. The process type specified for the CASES cube is not valid since it is not processed
Have I done the right thing?
The reason I prefer using Process Data and then Process Index is that it is much faster than Process Full.
I'm not sure though what Process Index is for. As far as I'm concerned, I have designed partitions (I used the 'performance gain reaches' option) but I don't remember explicitly declaring indexes.
Please help...
cherriesh
Hi,
In theory, what you did seems as if it should work. Have you tried doing it outside SSIS? So in Management Studio, process data and then index; there might be a problem with the cube not actually processing the data step properly.
but here is a description of what the different process types do
http://technet.microsoft.com/en-us/library/ms174774.aspx
Hope that helps
Matt
|||Hi,
The ProcessFull is working fine. I don't know how to do this outside SSIS. Am I right to have Process Data for both the Dimension and the Cube, and to have another Analysis Services Processing Task doing Process Index for both (Dimension and Cube)?
cherriesh
|||Hi,
I found a better link:
http://msdn2.microsoft.com/en-us/library/ms345142.aspx
For the dimensions:
A Process Full will reprocess the dimension completely.
Never done a Process Index on a dimension, but I would assume Process Data then Process Index.
Once the dimensions are processed, then process the cube:
Process Full, or
Process Data and Process Index (two separate tasks, I am guessing)
You could try
(task 1) Process the structure of the cube (processes all the dimensions)
(task 2) Process full of cube
OR
task 1, process data of dims
task 2, process indexes
task 3, process data of cube
task 4, process indexes of cube
OR
task 1, process structure of cube
task 2, process data of cube
task 3, process indexes of cube
Sure there are lots of different ways.
With reference to alternative ways of processing:
http://technet.microsoft.com/en-us/library/aa337509.aspx
Hope that helps,
Matt
Process could not deliver update(s) at the publisher
I have created a publication with row filtering using merge replication. I configured 5-6 subscribers for it. It was working fine for 2 months. But now for two subscribers it's showing the error 'Process could not deliver update(s) at the publisher'.
I've checked the job history... it's showing:
Merge process encountered an unexpected network error. The connection to Publisher 'Publishername' is no longer available
can any one help me?
thanks and regards
Dhanya
Sounds like a connection issue. Does it happen intermittently or persistently? If persistently, you can try to use osql.exe to connect to server "Publishername" from the machine where the merge process is launched.
Thanks,
Peng
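For example, from the subscriber machine (the server name is a placeholder; use -E for Windows authentication instead of -U/-P if applicable):
osql -S Publishername -U sa -P <password> -Q "SELECT @@SERVERNAME"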
|||I've increased the query timeout both at the publisher and the subscriber...
Now I am getting another message: "The process is running and is waiting for a response from one of the backend connections"
process could not execute 'sp_repldone/sp_replcounters'... Another log reader is replicating the database
Hilary, Paul,
I have read some of your threads on some sites
regarding this. You had suggested that maybe
restarting the SQL agent/server would resolve this.
I tried that with no luck. I have yet to reboot
the server, because it's in a production
environment. I will try that this weekend at
night. However, I was wondering: I get this
error quite a bit on all publications going
through this particular distributor. Resources
are fine on both the publisher and distributor.
Is there anything else I can check to resolve
this? Reinitializing isn't an option.
Thanks guys..
comb
Stop the SQL Server Agent. If this does not solve the problem, stop it,
issue sp_replflush, and then restart it.
If this does not clear the problem, enable logging and post the error
message you get back here.
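A minimal sketch of that sequence (the database name is hypothetical, and the agent service name can vary by instance):
-- stop the SQL Server Agent service first (e.g. net stop SQLSERVERAGENT)
USE PublishedDb
EXEC sp_replflush
-- then restart the agent (net start SQLSERVERAGENT)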
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"Combfilter" <adsf@.asdf.com> wrote in message
news:MPG.1bc491484347f0de9896b5@.news.newsreader.co m...
> Hilary, Paul.
> I have read some of your threads on some sites
> regarding this. You had suggested that maybe
> restarting sql agent/server would resolve this.
> I tried that with no luck. I have yet to reboot
> the server, because it's in a production
> enviroment. I will try that this weekend at
> night. However, i was wondering. I get this
> error quit a bit on all publications going
> through this particular distributor. resources
> are fine on both the publisher and distributor.
> Is there anything else I can check to resolve
> this. reinitializing isn't an option.
> Thanks guys..
> comb
process could not execute 'sp_repldone/sp_replcounters'
I have had to fully restore my publisher database. Since then my
transactional replication no longer works, with the 'Log Reader' failing to
start and reporting the error message "The process could not execute
'sp_repldone/sp_replcounters' on ServerName".
I have run via Enterprise Manager 'Validate Subscriptions' and
'Re-initialize All Subscriptions' but the Log Reader still fails. Should I
simply re-start the server or run sp_replrestart?
Any assistance would be greatly appreciated.
Kind regards
Robert
the fastest way to fix this is to drop and recreate your subscription.
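A hedged T-SQL sketch of that fix for a push subscription (publication, subscriber, and database names are all hypothetical; exact parameters depend on your setup):
-- on the publisher, in the published database
EXEC sp_dropsubscription @publication = N'MyPub', @article = N'all', @subscriber = N'SubServer'
EXEC sp_addsubscription @publication = N'MyPub', @subscriber = N'SubServer',
    @destination_db = N'SubDb', @subscription_type = N'push'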
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Robert Paynter" <Robert Paynter@.discussions.microsoft.com> wrote in message
news:AD6E580D-02D3-4722-8AA0-E33979A1F0FD@.microsoft.com...
> I have had to fully restored my publisher database. Since then my
> transaactional replication no longer works with the 'Log Reader' failing
to
> start and reporting the error message "The process could not execute
> 'sp_repldone/sp_replcounters' on ServerName".
> I have run via Enterprise Manager 'Validate Subscriptions' and
> 'Re-initialize All Subscriptions' but the Log Reader still fails. Should
I
> simply re-start the server or run sp_replrestart?
> Any assistance would be greatly appreciated.
> Kind regards
> Robert
|||Hi Hilary
Many thanks for your reply. Just so I am clear: in Enterprise Manager I
would delete the existing publication and create a 'New Publication'?
Also could I ask a related question ..... On the destination server the
existing subscription (Enterprise Manager\Replication\Subscriptions\....)
does not delete when a publication is deleted via Enterprise Manager. In the
past I have investigated how to delete old subscriptions without success.
Is there a way of doing this of which you are aware?
Again thank you for your kind assistance.
Kind regards
Rob
"Hilary Cotter" wrote:
> the fastest way to fix this is to drop and recreate your subscription.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "Robert Paynter" <Robert Paynter@.discussions.microsoft.com> wrote in message
> news:AD6E580D-02D3-4722-8AA0-E33979A1F0FD@.microsoft.com...
> to
> I
>
>
|||Yes, if you script out your publication first it will be easier for you to
rebuild it.
Yes, there are two reasons why you get these orphaned subscribers, well
three actually, the third involves Paul Ibison and a big bottle of Scotch,
but he has sworn me to secrecy on this one.
If you restore an unpublished database onto a published database you can
orphan your subscribers. If you drop a pull subscription through a script on
the publisher only you can orphan them.
I can normally right-click on the subscription and then delete it. If not, I
manually remove the entry from MSsubscription_properties.
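If you do fall back to manual removal, it would look roughly like this on the subscriber (names hypothetical; this edits replication metadata directly, so back up first):
USE SubscriberDb
DELETE FROM dbo.MSsubscription_properties
WHERE publisher = N'MyPublisher' AND publication = N'MyPub'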
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Robert Paynter" <RobertPaynter@.discussions.microsoft.com> wrote in message
news:E433EA9D-8239-4256-908C-9CA841D24DF1@.microsoft.com...
> Hi Hilary
> Many thanks for your reply. Just so I am clear in Enterprise Manager I
> would delete the existing pulication and create a 'New Publication'
> Also could I ask a related question ..... On the destination server the
> existing Subcription (Enterprise Manager\Replication\Subscriptions\....)
> does not delete when a publication is deleted via Enterprise Manager. In
> the
> past I have investiigated how to delete old subscriptions without success.
> Is there a way of doing this of which you are aware?
> Again thank you for your kind assistance.
> Kind regards
> Rob
> "Hilary Cotter" wrote:
|||Hi Hilary
Again I would like to thank you for your reply and the time you give to do
this. I shall proceed with fixing the problem as advised and we'll be able
to get the subscription database functioning again.
Again many thanks.
Kind regards
Rob
"Hilary Cotter" wrote:
> Yes, if you script out your publication first it will be easier for you to
> rebuild it.
> Yes, there are two reasons why you get these orphaned subscribers, well
> three actually, the third involves Paul Ibison and a big bottle of Scotch,
> but he has sworn me to secrecy on this one.
> If you restore an unpublished database onto a published database you can
> orphan your subscribers. If you drop a pull subscription through a script on
> the publisher only you can orphan them.
> I can normally right click on the subscription and then delete it. If not I
> manually remove the entry from mssubscription_properties.
>
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "Robert Paynter" <RobertPaynter@.discussions.microsoft.com> wrote in message
> news:E433EA9D-8239-4256-908C-9CA841D24DF1@.microsoft.com...
>
>
process could not connect to Distributor
SQL Server SP4 on the publisher/distributor.
MSDE SP4 on the subscriber
Merge Replication
Anonymous Pull Subscriptions
TCP/IP only - removed Named Pipes
Windows Authent only - removed SQL Server
Setup an alias on the subscriber to the publisher.
Running on same network but different workgroups. Can not use Windows
Authentication as moving to Internet model.
Followed
http://support.microsoft.com/default...&Product=sql2k
So I set up the publisher to use an SA user and gave that SA user ownership
and admin rights to the publisher database.
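For what it's worth, that ownership change can be scripted (the database name is hypothetical):
USE PublicationDb
EXEC sp_changedbowner 'sa'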
I keep getting this error from the ActiveX component in VB.NET on the
subscriber: The process could not connect to Distributor 'W2KSERVER'.
SQL Server does not exist or access denied.
I had these guys working when it was SP3a, still had named pipes, and using
Windows Authentication passthrough.
Thanks,
Buzz
Configure DistributorNetwork for TCP/IP, and use the FQDN for
DistributorAddress (i.e. sqlserver.mydomainname.com).
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Buzz" <buzz@.online.nospam> wrote in message
news:EC950BD0-5245-47EB-8320-06D6633633BA@.microsoft.com...
> Sql Server SP4 on the publisher/distributer.
> MSDE SP4 on the subscriber
> Merge Replication
> Anonymous Pull Subscriptions
> TCP/IP only - removed Named Pipes
> Windows Authent only - removed SQL Server
> Setup an alias on the subscriber to the publisher.
> Running on same network but different workgroups. Can not use Windows
> Authentication as moving to Internet model.
> Followed
>
http://support.microsoft.com/default...&Product=sql2k
> So I setup the publisher to use an SA user and gave that SA user ownership
> and admin rights to the publisher database.
> I keep getting this error from the ActiveX component in VB.NET on the
> subscriber: The process could not connect to Distributor 'W2KSERVER'.
> SQL Server does not exist or access denied.
> I had these guys working when it was SP3a, still had named pipes, and
using
> Windows Authentication passthrough.
> Thanks,
> Buzz
>
>
|||Hello,
Name resolution must be working between all machines involved in
replication. Otherwise, you may receive the error message.
I suggest that you perform the following steps to narrow down the issue:
* Drop the subscription and then drop the publication.
* Set up name resolution both ways between the machines. There are several
options for achieving name resolution. You can use Client Network Utility (CNU)
aliases (cliconfg.exe), hosts files, WINS, and/or DNS.
* On the publisher reregister in publisher and subscriber in Enterprise
Manager by netbios name instead of by IP address.
* On the remote server, register the publisher and the subscriber in
Enterprise Manager by netbios name (@@servername).
* Recreate the publication and then recreate the subscription.
* Run the Merge agent. Check the issue.
The following information is for your reference:
Potential causes of the "SQL Server does not exist or access denied" error
message
http://support.microsoft.com/?id=328306
324992 How to use replication with SQL Server 2000 Desktop Engine (MSDE
2000)
http://support.microsoft.com/?id=324992
I hope the information is helpful.
Sophie Guo
Microsoft Online Partner Support
Get Secure! - www.microsoft.com/security
=====================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
|||All that and it turns out it was a software firewall issue with XP on the
subscriber laptop.
However, now I am stuck with the next issue. Part of replicating over
untrusted domains or the internet means I need to use FTP. On the subscribers
I use the ActiveX merge module.
I specify the FTP address:
_merge.FileTransferType = FILE_TRANSFER_TYPE.FILETRANSFERFTP
If Me.txtFtpAddress.Text.StartsWith("ftp://") = False Then
Me.txtFtpAddress.Text = "ftp://" & Me.txtFtpAddress.Text
End If
Me.txtFtpAddress.Text = Me.txtFtpAddress.Text.Replace("\", "/")
_merge.FTPAddress = Me.txtFtpAddress.Text
But it keeps erroring out saying it cannot find the FTP location specified
at the value I have for _merge.Publisher?
The value for the Ftp is ftp://192.168.1.2/repldata.
The value for the Publisher is server909.
I put in an LMHOSTS entry to route 192.168.1.2 to server909 and it works.
But I'm not going to be doing LMHOSTS management on 500+ production laptops,
and we plan on using FTP servers that are different from the SQL
Server.
"Sophie Guo [MSFT]" wrote:
> Hello,
> Name resolution must be working between all machines involved in
> replication. Otherwise, you may receive the error message.
> I suggest that you perform the following steps to narrow down the issue:
> * Drop the subcription and then drop the publication.
> * Setup name resolution both ways between the machines. There are several
> options for achieving name resolution. You can use CNU aliases
> (cliconfg.exe), hosts files, WINS, and/or DNS.
> * On the publisher reregister in publisher and subscriber in Enterprise
> Manager by netbios name instead of by IP address.
> * On the remote server, register the publisher and the subscriber in
> Enterprise Manager by netbios name (@.@.servername).
> * Recreate the publication and then recreate the subscription.
> * Run the Merge agent. Check the issue.
> The following information is for your reference:
> Potential causes of the "SQL Server does not exist or access denied" error
> message
> http://support.microsoft.com/?id=328306
> 324992 How to use replication with SQL Server 2000 Desktop Engine (MSDE
> 2000)
> http://support.microsoft.com/?id=324992
> I hope the information is helpful.
> Sophie Guo
> Microsoft Online Partner Support
> Get Secure! - www.microsoft.com/security
> ================================================== ===
> When responding to posts, please "Reply to Group" via your newsreader so
> that others may learn and benefit from your issue.
> ================================================== ===
> This posting is provided "AS IS" with no warranties, and confers no rights.
>
>
|||Not sure if the following will help, but let me try to clarify a few things:
1) FTP support in SQL replication applies only to the transfer of snapshot
files; movement of SQL changes requires a regular connection to SQL
Server (Publisher and Distributor).
2) The Ftp* properties on the Merge\Distribution ActiveX control have
actually been deprecated in SQL2000; configuration of FTP snapshot support is
done at the publication level (right-click the publication for properties). Once
that is configured, you will need to regenerate a new snapshot for the
settings to take effect. And when you start the merge control, all you need
to do is specify a FileTransferType of FTP. In hindsight, it was probably
a mistake not to take out the Ftp* properties completely.
3) There is still the all-important task of getting the merge ActiveX
control to establish a regular connection to SQL Server from an untrusted
domain in the first place. This typically means either 1) exposing
whatever port the distributor and publisher are listening on (the default
SQL Server port is 1433) to the outside world (or making it accessible from the
laptops) and then using the Distributor|PublisherNetwork|Address properties to
point to the exposed IP address\port configuration, or 2) (recommended)
configuring VPN between the laptops and the Publisher\Distributor machines so
regular SQL Server connections can be established. If you decide to use the
VPN approach, there really is no need to configure FTP for the snapshot files,
as regular UNC shares should be accessible by the laptops.
HTH
-Raymond
"Buzz" wrote:
> All that and it turns out it was a software firewall issue with XP on the
> subscriber laptop.
> However, now I am stuck with the next issue. Part of replicating over
> untrusted domains or the internet means I need to use FTP. On the subscribers
> I use the ActiveX merge module.
> I specify the FTP address:
> _merge.FileTransferType = FILE_TRANSFER_TYPE.FILETRANSFERFTP
> If Me.txtFtpAddress.Text.StartsWith("ftp://") = False Then
> Me.txtFtpAddress.Text = "ftp://" & Me.txtFtpAddress.Text
> End If
> Me.txtFtpAddress.Text = Me.txtFtpAddress.Text.Replace("\", "/")
> _merge.FTPAddress = Me.txtFtpAddress.Text
> But it keeps erroring out saying it cannot find the FTP location specified
> at the value I have for _merge.Publisher?
> The value for the Ftp is ftp://192.168.1.2/repldata.
> The value for the Publisher is server909.
> I put an LMHOSTS entry to route the 192.168.1.2 to server909 it works.
> But I'm not going to be doing LMHOSTS management on 500+ production laptops
> and we plan on using different FTP servers that are different than the SQL
> Server.
> "Sophie Guo [MSFT]" wrote:
Process constantly polling the tmw_queue table looking for work to do
I have created a package that will backup prod. database and restore
dev. database.
The job failed because there is a process constantly polling for work
to do.
How can I have the job kill this process, put database in single user
mode and then restore database?
Isabel
Use ALTER DATABASE to set restricted user or single user and the ROLLBACK option of ALTER DATABASE
(see Books Online for syntax).
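A job step along those lines might look like the following sketch (the database name and backup path are hypothetical):
ALTER DATABASE DevDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE
RESTORE DATABASE DevDb FROM DISK = N'C:\Backups\ProdDb.bak' WITH REPLACE
ALTER DATABASE DevDb SET MULTI_USER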
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"confused" <isabelellis@.hotmail.com> wrote in message
news:1133543598.767838.12810@.o13g2000cwo.googlegro ups.com...
>I have created a package that will backup prod. database and restore
> dev. database.
> The job failed because there is a process constantly polling for work
> to do.
> How can I have the job kill this process, put database in single user
> mode and then restore database?
> Isabel
>
Process Capability Chart
I am trying to build a process capability chart in Reporting Services.
I have tried everything that I can think of with no luck. I have the bar
chart w/ a normal distribution (bell curve) and need to get the limits added
to the chart but can't get a chart to overlay another one, get a line to
overlay on top of the chart, nor add the limits to the existing chart
without fouling up the first set of bars. Is there any way to overlay a
chart, control the width of the bars, or something else that I have not
tried? Any help would be greatly appreciated!!
Thanks in advance,
Steve
Hi Steve,
Welcome to use MSDN Managed Newsgroup!
From your descriptions, I understood you would like to know how to get a
line overlay chart for a process capability chart. However, I am not sure
about your scenario. Could you send me a sample file which
demonstrates your requirements? My direct email address is
v-mingqc@.online.microsoft.com (the ONLINE is inserted to avoid spam in the
PUBLIC newsgroups where we post); you may send the file to me directly and
I will keep it secure.
Sincerely yours,
Michael Cheng
Microsoft Online Partner Support
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
Process cannot connect to distributor error !!!
So far my C# app can subscribe and unsubscribe just fine, but when it comes to the syncing process it gets a little weird. Here's why:
- If I run the SQL GUI sync tool, it syncs fine. So that setup is working.
- When I run the C# App it blows up witht the error: "Process cannot connect to Distributor"
- Here's the weird part: when I set a breakpoint on my custom method SyncData (this method sets up the connection info and properties and then calls Synchronize()) and step through the code, when it gets to Synchronize() it runs perfectly fine, no exceptions.
I thought that maybe the Synchronize() method was being called too fast, before the connection properties got fully set up and connected, so I added a Thread.Sleep() call for 10 secs after each connection call and just before calling Synchronize(). It still didn't work. My class is a static class by design; however, I changed it into an ordinary class and then placed the connection info in the constructor, hoping it would do its connection when the object gets initialized. That didn't work either.
If someone could please help me out with this I would greatly appreciate it.
Here's my code:
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.SqlServer.Replication;
using Microsoft.SqlServer.Replication.BusinessLogicSupport;
using Microsoft.SqlServer.Management.Common;
using System.Windows.Forms;
using System.Threading;

namespace Emds.Briefcase.BriefcaseSubscriber.BLL
{
    public class SyncDataClass
    {
        #region Members
        private static string m_statusMessage = string.Empty;
        private static byte m_percentComplete;
        public delegate void StatusTextChangeHandler(int percent, string status);
        public static event StatusTextChangeHandler OnStatusChange;
        #endregion Members

        #region Methods
        public static string SyncData()
        {
            //Delay();
            // Define the server, publication, and database names.
            string publicationName = "Chart";
            string publisherName = @"JSMITH\SQL2005";
            string subscriberName = @"JDOE\SQL2005";
            string subscriptionDbName = "DataSubscriber";
            string publicationDbName = "DataPublisher";
            string message = string.Empty;

            // Create a connection to the Subscriber.
            ServerConnection conn = new ServerConnection(subscriberName);
            MergePullSubscription subscription;
            try
            {
                // Connect to the Subscriber.
                conn.Connect();
                //Delay();
                // Define the pull subscription.
                subscription = new MergePullSubscription();
                subscription.ConnectionContext = conn;
                //Delay();
                subscription.DistributorSecurity.WindowsAuthentication = false;
                subscription.DistributorSecurity.SqlStandardLogin = "sa";
                subscription.DistributorSecurity.SqlStandardPassword = "russell";
                //Delay();
                subscription.PublisherName = publisherName;
                subscription.PublicationDBName = publicationDbName;
                subscription.PublicationName = publicationName;
                subscription.PublisherSecurity.SecurityMode = ReplicationSecurityMode.SqlStandard;
                subscription.PublisherSecurity.SqlStandardLogin = "sa";
                subscription.PublisherSecurity.SqlStandardPassword = "russell";
                //Delay();
                subscription.DatabaseName = subscriptionDbName;
                subscription.SubscriberSecurity.WindowsAuthentication = false;
                subscription.SubscriberSecurity.SqlStandardLogin = "sa";
                subscription.SubscriberSecurity.SqlStandardPassword = "russell";
                //Delay();
                // If the pull subscription exists, then start the synchronization.
                if (subscription.LoadProperties())
                {
                    // Check that we have enough metadata to start the agent.
                    if (subscription.PublisherSecurity != null || subscription.DistributorSecurity != null)
                    {
                        // Synchronously start the Merge Agent for the subscription.
                        subscription.SynchronizationAgent.Status += new AgentCore.StatusEventHandler(SynchronizationAgent_Status);
                        //Delay();
                        subscription.SynchronizationAgent.Synchronize();
                        message = "Data synchronization is a success!";
                    }
                    else
                    {
                        throw new ApplicationException("There is insufficient metadata to " +
                            "synchronize the subscription. Recreate the subscription with " +
                            "the agent job or supply the required agent properties at run time.");
                    }
                }
                else
                {
                    // Do something here if the pull subscription does not exist.
                    throw new ApplicationException(String.Format(
                        "A subscription to '{0}' does not exist on {1}",
                        publicationName, subscriberName));
                }
            }
            catch (Exception ex)
            {
                // Implement appropriate error handling here.
                throw new ApplicationException("The subscription could not be " +
                    "synchronized. Verify that the subscription has " +
                    "been defined correctly.", ex);
            }
            finally
            {
                conn.Disconnect();
            }
            return message;
        }

        static void SynchronizationAgent_Status(object sender, StatusEventArgs e)
        {
            m_percentComplete = e.PercentCompleted;
            m_statusMessage = e.Message;
            // Fire the custom event.
            if (OnStatusChange != null)
            {
                OnStatusChange(m_percentComplete, m_statusMessage);
            }
        }
        #endregion Methods
    }
}
Moved to SQL Server Replication forum.
|||You may want to try setting the distributor security on the Agent object directly before calling subscription.SynchronizationAgent.Synchronize(), like so:
subscription.SynchronizationAgent.DistributorSecurityMode = SecurityMode.Standard
subscription.SynchronizationAgent.DistributorLogin = "sa"
subscription.SynchronizationAgent.DistributorPassword = "russell"
Hope that helps,
-Raymond
Raymond, thanks for your response. If you look at my code listing I think you will see that I am already doing that, unless I misunderstood what you were suggesting.
Thanks again,
Chris|||
There is actually a subtle difference: I believe your code is setting the security info on the subscription object, whereas my code is setting it on the SynchronizationAgent object.
-Raymond
Ok, I will give it a shot. Thank you again!!|||That was it!!! Thank you very, very much!!!!! Take care!!!
Process Blocking Problem
We have a problem with process blocking. At some point the following T-SQL statement runs, blocking all our users from accessing the DB.
The statement is:
SET FMTONLY ON
exec sp_execute 1
SET FMTONLY OFF
Only when it finishes running are users allowed to access the DB again.
It is triggered automatically. Does anyone know what condition on the server is causing these statements to run and what can be done to prevent it? Thanks so much for your help.
Look at
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql7/html/sqlquerproc.asp
It explains what sp_execute does.
Do you have a lot of dynamic sql or embedded sql rather than SPs?|||Also look at
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsqlmag2k/html/adoapp.asp|||What I found out from the Microsoft website that sp_execute is a command that runs a prepared statement using the sp_prepare command.
So what happened is I've watched the SQL for the condition to repeat but it did not. However what I've noticed that our of nowhere at time sp_prepare runs without any parameters, causing the CPU utilization go up to high 90%.
What is causing this sp_prepate to run, as it done automatically.
Thanks for your help.
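For reference, the prepare/execute pattern the driver uses can be reproduced by hand to see what such a statement does (the table and parameter here are made up):
DECLARE @handle int
EXEC sp_prepare @handle OUTPUT, N'@id int', N'SELECT * FROM dbo.Orders WHERE OrderID = @id'
EXEC sp_execute @handle, 42 -- this is the call that shows up in the trace
EXEC sp_unprepare @handle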
Process blocking itself since SP4
Since installing SP4 we seem to encounter issues where a process seems to block itself. Has anyone seen this issue? Is this a feature?
Thanks,
Dave
Please refer to the following for more details; this may be the reason.
The original posting can be found in many places, for example: http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server/23496/SP4-reports-queries-blocking-on-themselves
There is also a KB article on this: http://support.microsoft.com/default.aspx?scid=KB;EN-US;906344
Thanks
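As the KB describes, such a spid shows up in sysprocesses with the blocked column equal to its own spid; a quick way to spot these (SQL Server 2000 syntax):
SELECT spid, blocked, waittype, lastwaittype, waitresource
FROM master.dbo.sysprocesses
WHERE blocked = spid -- latch waits reported as self-blocking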
Process blocking itself
I'm having a problem running a SQL Server 2000 query, in that the
process is blocked by itself and doesn't finish, even after waiting 20
hours for it.
The table has 40 million records and im running something like :
UPDATE table SET field2 = db.myfunction(field1)
The function isn't doing anything with tables; it just does some string
manipulation on the parameter string.
Someone said it might be due to parallelism (multiple processes
running at the same time causing the process to lock up / block
itself), but even if I try:
UPDATE table SET field2 = db.myfunction(field1) OPTION (MAXDOP 1)
it doesnt make any difference.
Hope someone can help, thanks,
Paul|||How do you know that the process is "blocked by itself"? Are you seeing
this in sysprocesses, or are you guessing? Usually a spid self-blocking, in
and of itself, is not a cause for concern, it is just a change in the way
sysprocesses reported waits (I think it started working this way in 2000
SP4). Please see the following KB article for more information:
http://support.microsoft.com/kb/906344
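As a quick sketch, you can confirm what the spid is actually waiting on by querying sysprocesses directly:
SELECT spid, blocked, waittype, lastwaittype, waittime, physical_io
FROM master..sysprocesses
WHERE spid = blocked AND blocked <> 0
-- A nonzero waittime with climbing physical_io usually means the spid is
-- waiting on its own I/O, not truly blocked.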
"puma75" <paulroskilly@.gmail.com> wrote in message
news:1191868345.918936.134610@.57g2000hsv.googlegroups.com...
> Im ahving a problem running a sql server 2000 query, in that the
> process is blocked by itself and doesn't finish, even after waiting 20
> hours for it.
> The table has 40 million records and im running something like :
> UPDATE table SET field2 = db.myfunction(field1)
> The function isnt doing anthing with tables, it justs does some string
> manipulation on the parameter string.
> Someone said it might be dur to parallelism (multiple processes
> running at the same time causing the process to lock up / block
> itself), but even if i try :
> UPDATE table SET field2 = db.myfunction(field1) OPTION (MAXDOP 1)
> it doesnt make any difference.
> Hope someone can help, thanks,
> Paul
>|||On 8 Oct, 19:42, "Aaron Bertrand [SQL Server MVP]"
<ten...@.dnartreb.noraa> wrote:
> How do you know that the process is "blocked by itself"? Are you seeing
> this in sysprocesses, or are you guessing? Usually a spid self-blocking, in
> and of itself, is not a cause for concern, it is just a change in the way
> sysprocesses reported waits (I think it started working this way in 2000
> SP4). Please see the following KB article for more information:
> http://support.microsoft.com/kb/906344
> "puma75" <paulroski...@.gmail.com> wrote in message
> news:1191868345.918936.134610@.57g2000hsv.googlegroups.com...
>
> > Im ahving a problem running a sql server 2000 query, in that the
> > process is blocked by itself and doesn't finish, even after waiting 20
> > hours for it.
> > The table has 40 million records and im running something like :
> > UPDATE table SET field2 = db.myfunction(field1)
> > The function isnt doing anthing with tables, it justs does some string
> > manipulation on the parameter string.
> > Someone said it might be dur to parallelism (multiple processes
> > running at the same time causing the process to lock up / block
> > itself), but even if i try :
> > UPDATE table SET field2 = db.myfunction(field1) OPTION (MAXDOP 1)
> > it doesnt make any difference.
> > Hope someone can help, thanks,
> > > Paul
Thanks.
In Enterprise Manager under Management > Current Activity > Locks/
Process Id it says "spid 73 (blocked by 73)"
It just seems very slow that it hasn't finished after 20 hours, and it
didn't seem to be doing anything (little CPU usage), so thinking that it
was deadlocked seemed like a good explanation, but maybe not.|||> It just seems very slow that it hasn't finished after 20 hours and it
> didnt seem to be doing anything, little cpu usage, so thinking that it
> was deadlocked seemed like a good explanation, must maybe not.
It's certainly not deadlocking because otherwise SQL Server would have
detected it and terminated one of the deadlock participants.
Perhaps the UPDATE is not the only SQL statement, and there is still
runaway parallelism going on. I'd check the query plan to see whether
there is any parallelism.
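As a sketch, one way to check for parallelism in SQL Server 2000 without actually running the statement (table and function names are the poster's placeholders):
SET SHOWPLAN_TEXT ON
GO
UPDATE table SET field2 = db.myfunction(field1)
GO
SET SHOWPLAN_TEXT OFF
GO
-- Look for Parallelism operators in the showplan output.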
Linchi
"puma75" wrote:
> On 8 Oct, 19:42, "Aaron Bertrand [SQL Server MVP]"
> <ten...@.dnartreb.noraa> wrote:
> > How do you know that the process is "blocked by itself"? Are you seeing
> > this in sysprocesses, or are you guessing? Usually a spid self-blocking, in
> > and of itself, is not a cause for concern, it is just a change in the way
> > sysprocesses reported waits (I think it started working this way in 2000
> > SP4). Please see the following KB article for more information:
> >
> > http://support.microsoft.com/kb/906344
> >
> > "puma75" <paulroski...@.gmail.com> wrote in message
> >
> > news:1191868345.918936.134610@.57g2000hsv.googlegroups.com...
> >
> >
> >
> > > Im ahving a problem running a sql server 2000 query, in that the
> > > process is blocked by itself and doesn't finish, even after waiting 20
> > > hours for it.
> >
> > > The table has 40 million records and im running something like :
> >
> > > UPDATE table SET field2 = db.myfunction(field1)
> >
> > > The function isnt doing anthing with tables, it justs does some string
> > > manipulation on the parameter string.
> >
> > > Someone said it might be dur to parallelism (multiple processes
> > > running at the same time causing the process to lock up / block
> > > itself), but even if i try :
> >
> > > UPDATE table SET field2 = db.myfunction(field1) OPTION (MAXDOP 1)
> >
> > > it doesnt make any difference.
> >
> > > Hope someone can help, thanks,
> >
> > > Paul
> Thanks.
> In Enterprise Manager under Management > Current Activity > Locks/
> Process Id it says "spid 73 (blocked by 73)"
> It just seems very slow that it hasn't finished after 20 hours and it
> didnt seem to be doing anything, little cpu usage, so thinking that it
> was deadlocked seemed like a good explanation, must maybe not.
>
>|||On Oct 8, 1:32 pm, puma75 <paulroski...@.gmail.com> wrote:
> Im ahving a problem running a sql server 2000 query, in that the
> process is blocked by itself and doesn't finish, even after waiting 20
> hours for it.
> The table has 40 million records and im running something like :
> UPDATE table SET field2 = db.myfunction(field1)
> The function isnt doing anthing with tables, it justs does some string
> manipulation on the parameter string.
> Someone said it might be dur to parallelism (multiple processes
> running at the same time causing the process to lock up / block
> itself), but even if i try :
> UPDATE table SET field2 = db.myfunction(field1) OPTION (MAXDOP 1)
> it doesnt make any difference.
> Hope someone can help, thanks,
> Paul
a few suggestions:
- check if you have triggers on that table.
- try to do your updates in batches of, say, 10K rows (see the sketch below).
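A rough sketch of that batching idea, SQL 2000 style; it assumes unprocessed rows can be identified somehow, here by field2 still being NULL (table and function names are placeholders):
SET ROWCOUNT 10000  -- cap each UPDATE at 10K rows (SQL 2000; on 2005+ use UPDATE TOP)
WHILE 1 = 1
BEGIN
    UPDATE mytable SET field2 = dbo.myfunction(field1)
    WHERE field2 IS NULL
    IF @@ROWCOUNT = 0 BREAK  -- nothing left to update
END
SET ROWCOUNT 0  -- restore the default
|||Aaron Bertrand [SQL Server MVP] wrote: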
> > How do you know that the process is "blocked by itself"? Are you
> > seeing this in sysprocesses, or are you guessing? Usually a spid
> > self-blocking, in and of itself, is not a cause for concern, it is
> > just a change in the way sysprocesses reported waits (I think it
> > started working this way in 2000 SP4). Please see the following KB
> > article for more information:
> >
> > http://support.microsoft.com/kb/906344
puma75 wrote:
> In Enterprise Manager under Management > Current Activity > Locks/
> Process Id it says "spid 73 (blocked by 73)"
> It just seems very slow that it hasn't finished after 20 hours and it
> didnt seem to be doing anything, little cpu usage, so thinking that it
> was deadlocked seemed like a good explanation, must maybe not.
In this case, I think it's exactly what Aaron said: you are seeing that
the process is waiting for I/O or network latches (belonging to that
process). I bet the query is accessing a lot of data and the HDD LED is
lit up almost continuously.
Try to rephrase the query (to optimize it) or try adding appropriate
indexes (if necessary). Also, try updating the statistics before
running the query.
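For example (a sketch; the table name is a placeholder):
UPDATE STATISTICS mytable WITH FULLSCAN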
--
Razvan Socol
SQL Server MVP
Process blocking DataBase [BULK-OP-DB]
Hi all,
I have a problem with a process that I think is used for automatic database file growth (it is set to 10%).
It does not finish, and it blocks all connections to the database:
Process Info:
ID: xxx
Block type: DB
Mode: NULL
State: GRANT
Own: Xact
Resource: [BULK-OP-DB] and [BULK-OP-LOG]
What can I do for resolve it?
Thanks in advance
Hi
Don't use autogrow. Rather, manage the DBs correctly and grow them manually,
in a controlled manner, when usage is lowest.
The problem is that the space created by the grow is not available to any
process until the grow is complete. Those pages are locked, so it looks like
the autogrow is blocking.
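A sketch of growing a data file manually ahead of demand (database name, logical file name, and size are placeholders):
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, SIZE = 28000MB)  -- must be larger than the current size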
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Mario Barro" wrote:
> Hil all,
> I have a problem wiht a process that I thing is used for auto growth file
> database (is automatic 10%)
> This problem not finish and block all connection to database:
> Process Info:
> ID: xxx
> Block type: DB
> Mode: NULL
> State: GRANT
> Own: Xact
> Resource: [BULK-OP-DB] and [BULK-OP-LOG]
> What can I do for resolve it?
> Thanks in advanced
>
>
|||Thanks Mike;
I grew the data file manually and the problem is solved.
I haven't had problems with other databases and automatic growth, but this
database is 24 GB.
Regards
Mario Barro
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> escribi en el mensaje
news:BBCBB823-861E-41D3-A5AD-566E3FF4A041@.microsoft.com...
> Hi
> Don't use autogrow. Rather Manage the Db's correctly and grow them
manually,
> in a controlled manner, when usage is least.
> The problem is that the space created by the grow is not available to any
> process until the grow is complete. Those pages are locked so it looks like
> the auto grow is blocking.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
>
> "Mario Barro" wrote:
|||Yes, and 10% of that is 2.4 GB! That will take some, not a lot, but some
time. I agree with Mike in that you should follow best practices and, as
the DBA, make sure the space is pre-allocated before needed. However, one
of SQL Server's strengths is its ease of administration.
As an alternative, if you know what your approximate BULK LOAD operations
size per load is, you could configure AUTOGROW at a fixed SIZE instead of a
fixed PERCENT. Something like 128, 256, or 512 MB. You'd still get a chunk
of space and would stall, but these would be on the order of 5 to 20 times
smaller, thus 1/5th to 1/20th the duration of the 10% AUTOGROW.
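For instance, a sketch of switching that database to a fixed growth increment (names are placeholders):
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 256MB)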
Hope this helps.
Sincerely,
Anthony Thomas
"Mario Barro" <newsss@.QUITAMEya.com> wrote in message
news:uyXpGzbYFHA.584@.TK2MSFTNGP15.phx.gbl...
Thanks Mike;
I grew the data file manually and the problem is solved.
I haven't had problems with other databases and automatic growth, but this
database is 24 GB.
Regards
Mario Barro
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> escribi en el mensaje
news:BBCBB823-861E-41D3-A5AD-566E3FF4A041@.microsoft.com...
> Hi
> Don't use autogrow. Rather Manage the Db's correctly and grow them
manually,
> in a controlled manner, when usage is least.
> The problem is that the space created by the grow is not available to any
> process until the grow is complete. Those pages are locked so it looks like
> the auto grow is blocking.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
>
> "Mario Barro" wrote:
Process blocks itself
Hi to everybody,
I have this problem.
Sometimes a process in my SQL Server 2005 (upd 2) instance is blocked by itself. If I look in the monitor, it is waiting for an LCK_M_SCH_M lock on a temporary table, as you can see from:
select * from sys.dm_tran_locks where request_session_id=51 and request_status <> 'GRANT'
resource type : OBJECT
resource id : 218899226
request mode : Sch-M
request status : WAIT
request_session id : 51
request life time : 33554432
or from
SELECT * FROM sys.dm_os_waiting_tasks WHERE SESSION_ID=51
waiting task address : 0x0000000000C2F198
session id : 51
wait type : LCK_M_SCH_M
resource address : 0x0000000201C71300
blocking task address : 0x0000000000C2F198
blocking session id = 51
resource description objectlock lockPartition=0 objid=218899226 subresource=FULL dbid=2 id=lock80d04900 mode=Sch-S associatedObjectId=218899226
I can resolve this problem only by restarting SQL Server; that clears tempdb and eliminates the process.
I think this is not the self-blocking reporting issue present since SQL Server 2000 SP4 (I read some documents about that problem).
So I have no idea how to resolve this problem, or how to kill this kind of process without restarting SQL Server.
Thanks in advance
Luca
Is this a job that runs a DTS package? If so, make sure you didn't port the dts jobs to a different server. It is possible that you have two of the same jobs running the same package at the same time but from different servers.|||
No, it isn't a DTS package, but I have found where the problem is...
I use a procedure like this to retry a block of instructions when I have an error:
Procedure 2
.....
create table #tmp .....
declare @n int
declare @maxcicle int
declare @cicle int
set @n = 100
set @maxcicle = 10
set @cicle = 1
while @n = 100
begin
    begin try
        begin transaction
        --
        -- instructions that use #tmp
        --
        commit tran
        set @n = 0
    end try
    begin catch
        set @cicle = @cicle + 1
        set @n = 100
        rollback transaction
        if @cicle > @maxcicle
        begin
            raiserror('retry limit exceeded', 16, 1)
        end
        waitfor delay '00:00:10'
    end catch
end
I call this procedure from Procedure 1
procedure 1
begin try
    begin transaction
    exec procedure2   -- i.e., Procedure 2 above
    commit transaction
end try
begin catch
    rollback transaction
end catch
end
When I have an error in Procedure 2, the rollback undoes everything back to the BEGIN TRANSACTION in Procedure 1, so the temporary table is destroyed as well. When the loop tries the cycle again, Procedure 2 can't find the temporary table and the process blocks itself: it waits for a resource that no longer exists, while SQL Server thinks that resource is in use by another process (which is really the same process). SQL Server can't resolve this, so the process remains blocked by itself, waiting for a resource that no longer exists.
Only restarting the SQL Server service destroys the process, because tempdb is recreated.
Thanks
Luca
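One possible way to retry without destroying the temp table, sketched here as an assumption rather than a confirmed fix, is to roll back to a savepoint instead of rolling back the whole transaction stack:
begin try
    save transaction retry_point   -- savepoint inside the outer transaction
    --
    -- instructions that use #tmp
    --
end try
begin catch
    -- XACT_STATE() = -1 means the transaction is doomed and cannot be rolled
    -- back to a savepoint; only a full rollback is possible in that case.
    if xact_state() = 1
        rollback transaction retry_point   -- undoes the work but keeps #tmp
end catch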
|||I had this problem and traced it to the processing of the table variables created with my procs. When I converted them to temp tables, the problem resolved. Under certain circumstances tempdb will be used for a table variable; the resource contention actually occurs in tempdb. I also turned on trace flag 1118 and increased the number of tempdb files to avoid my tempdb allocations being pulled from the same 4 GB address space in a tempdb file. I believe this is a bug in SQL 2005 but haven't confirmed this yet.|||
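For reference, a sketch of those two mitigations (file name, path, and size are placeholders; adding -T1118 as a startup parameter makes the trace flag survive restarts):
DBCC TRACEON (1118, -1)  -- allocate full extents in tempdb, reducing SGAM contention
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1024MB)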
Table variables are materialized in tempdb the same way that temp tables are. That's one of the common myths and misconceptions about table variables - that they are a memory-only structure - which has led way too many people to assume they are faster. It's just not the case. You'd decide one or the other based on scope needs, number of rows, and weighing the need for statistics against potential recompiles, etc. Too often, table variables become the first choice based on thinking they are "better" or faster, which is not true.
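A quick sketch that shows this on an otherwise quiet SQL Server 2005 instance (the hex-style name is system-generated and will differ):
declare @t table (id int)
select top 1 name, create_date
from tempdb.sys.objects
order by create_date desc
-- typically returns something like "#0DAF0CB0", the table variable's backing object in tempdb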
-Sue