Following up on my earlier blog post, I would like to thank you all for attending my recent presentation on SQL Server Dump Analysis for the PASS DBA Virtual Chapter on January 14th, 2015.
The presentation was well received, with a good number of attendees joining the session. Afterwards, I was asked to deliver extended deep-dive sessions on the topic, which I will be planning.
For those who either could not attend or want to see it again, I have embedded the presentation materials in this post.
Here are the slides from the presentation:
I am excited to be speaking at the PASS DBA Virtual Chapter on Wednesday, January 14th. Here are the topic, abstract, and schedule for the session:
SQL SERVER DUMP ANALYSIS (SPONSORED BY DELL SOFTWARE)
While administering SQL Server, you might have come across scenarios where a session terminates abruptly, the SQL Server instance crashes, or a SQL Server cluster fails over without a graceful message. Most often, a dump file is left behind which, unfortunately, is not in a human-readable format. In this one-hour session, you will learn the basics of a dump file, the different types of dumps, and the tools available for reading through a dump, with demos of various debugger commands and of how to analyze a dump file to establish the root cause.
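To give a taste of the demos, a typical first pass over a SQL Server dump file in WinDbg looks something like the sketch below. The local symbol-cache folder `C:\symbols` is an assumption; adjust the path to your environment:

```
$$ Point the debugger at the Microsoft public symbol server
$$ (C:\symbols is a local cache folder; any writable path works)
.sympath srv*C:\symbols*https://msdl.microsoft.com/download/symbols
.reload /f

$$ Let the debugger produce an automated first-pass analysis
!analyze -v

$$ Switch to the exception context and show the faulting call stack
.ecxr
kb

$$ Dump the call stacks of all threads in the process
~*k
```

The session will go deeper than this, but these few commands are usually enough to identify the module and call stack involved in a crash.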
Date: January 14th, 12:00 noon Mountain Time (click here to see it in your local time)
If you are a SQL Server DBA who wants to dig into dump files, which you can't read the way you read the ERRORLOG, come join us this Wednesday, January 14th, for the exciting stuff. This is a FREE session; you just need to be a member of PASS in order to register. Once you register, you also stand a chance to win a $100 Amazon gift card; the winner will be chosen through a draw.
Registration: You must register if you want to attend. You can register here.
We already have a good number of registrants. Hurry up, and please share and tweet about this.
See you on Wednesday!
Recently I came across a challenging issue and thought of sharing it with you all.
We have a SQL Server instance with replication set up, and this setup is a little different from normal: the customer uses a script-based replication configuration in which each table has a separate publisher and subscriber. There were around 100 such publishers.
The issue started when the log reader agent at the subscriber failed with the error message below:
"Replicated transactions are waiting for next Log backup or for mirroring partner to catch up."
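This message is typically tied to the publication database's "sync with backup" option: when it is enabled, the Log Reader Agent will not process transactions until a log backup has captured them. A minimal diagnostic and remediation sketch is below; the database name and backup path are placeholders:

```sql
-- Check whether 'sync with backup' is enabled on the publication database
SELECT DATABASEPROPERTYEX(N'MyPublishedDB', 'IsSyncWithBackup') AS sync_with_backup;

-- Option 1: take a log backup so the pending transactions can be processed
BACKUP LOG MyPublishedDB TO DISK = N'C:\Backup\MyPublishedDB.trn';

-- Option 2 (only if the backup guarantee is not required): disable the option
EXEC sp_replicationdboption
        @dbname  = N'MyPublishedDB',
        @optname = N'sync with backup',
        @value   = N'false';
```

In most environments a regular log backup schedule clears the message on its own; disabling "sync with backup" trades the consistency guarantee for lower latency, so treat it as a deliberate choice rather than a quick fix.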
Corruption in your production database!! It always sounds scary, doesn't it?
How about corruption in an In-Memory OLTP table?? That's even scarier…
You may have a situation where you have created an In-Memory OLTP database that contains both disk-based and memory-optimized tables. What will happen if one of the memory-optimized tables is corrupted? You will find the database in Restore_Pending state.
The easiest way out of this situation is to restore from a backup. Unfortunately, in this case we do not have a backup, so we somehow need to bring the database online. Remember, we cannot run DBCC CHECKDB on memory-optimized tables either. Stuck, aren't we?
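Before attempting any recovery, it is worth confirming what the engine reports about the database and its files. A quick status-check sketch is below (the database name `InMemDB` is a placeholder); note again that DBCC CHECKDB silently skips memory-optimized tables, so it cannot be used to detect or repair this corruption:

```sql
-- Check the database state reported by the engine
SELECT name, state_desc, user_access_desc
FROM sys.databases
WHERE name = N'InMemDB';

-- List the database files, including the memory-optimized filegroup container
SELECT type_desc, name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'InMemDB');
```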
While looking at a SQL Server health report, I found the affinity mask parameter in the sp_configure output showing a negative value.
name minimum maximum config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask -2147483648 2147483647 -1066394617 -1066394617
This output was from a SQL Server 2008 R2 instance running on Windows Server 2008 R2.
Affinity mask is a SQL Server configuration option used to bind worker threads to specific processors for improved performance. To learn more about affinity mask, read this. Usually, the value for affinity mask in sp_configure is a positive integer (in decimal format). The article linked above shows an example of a binary bitmask and the corresponding decimal value to set in sp_configure.
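The negative number is simply the two's-complement (signed) interpretation of a 32-bit bitmask whose high bit (CPU 31) is set; the configuration itself is perfectly valid. You can see the underlying bitmask by converting the signed run value to binary:

```sql
-- View the run value as a 4-byte bitmask instead of a signed decimal
SELECT CONVERT(varbinary(4), -1066394617) AS affinity_bitmask;
-- 0xC0701C07 in binary has bits 0-2, 10-12, 20-22, 30 and 31 set,
-- i.e. those are the CPUs SQL Server is affinitized to.
```

So any affinity mask that includes CPU 31 will show up as a negative decimal value in sp_configure.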
When we set up this blog, we didn't know that this very first post would be so popular that it alone would account for one-third of the total views on the site, or that it would rank #1 in the major search engines (as of today; see here and here). And we didn't realize it has been **two wonderful years**; it feels like just yesterday that we set up this blog. Time flies! :)
Yes, we are celebrating two years of http://sqlactions.com today, April 15th, 2014. On behalf of the authors of sqlactions (Manish, Karthic and Prashant), we would like to thank all our blog readers for the continued support. The growing number of site views and followers, the technical discussions through blog comments, and readers contacting us for their urgent production issues: all of it has been phenomenal over the past two years.
As we continue this SQL Server journey, we promise to keep publishing the same quality of posts, covering more topics, the latest versions, and more video blogs. If there is any special topic you would like us to blog about, please feel free to contact us.
To cherish the moment, here are the top 10 most-viewed and most popular posts on sqlactions.com so far.
- Collection and Reporting of Perfmon data for SQL Server “Capacity Planning” and “Trend Analysis”
- Automated Reports for SQL Server Perfmon data
- How to create custom schedule for SQL Server Agent Job
- A read operation on a large object failed while sending data to the client
- SQL Agent Job reports error “SQLServerAgent is not currently running” though Agent Service is running
- Part-1 How to build SQL Server Failover Cluster Lab on Windows 8 – Also see part-2 and part-3
- DBCC MEMORYSTATUS : How is Stolen Potential calculated
- Latch Timeout: To worry or not to?
- Simple backup Strategy for Distribution database
- [Part-1] Let’s drill why you “Cannot generate SSPI context” – Also see part-2
Today I would like to discuss one of the new enhancements in SQL Server 2012, called the LogPool.
Before we get into the details, let me explain how a log flush occurs in SQL Server. When a record is inserted or modified in a database, the change is first written to a log buffer in memory; these buffers become log blocks of varying size (512 bytes to 60 KB). When a buffer is full, or a commit is issued, the log blocks are flushed to the transaction log.
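In SQL Server 2012, the LogPool shows up as a dedicated memory clerk, so you can observe how much memory is being consumed by cached log blocks. A simple way to check:

```sql
-- Memory consumed by the log pool clerk (SQL Server 2012 and later)
SELECT type, name, pages_kb
FROM sys.dm_os_memory_clerks
WHERE type = N'MEMORYCLERK_SQLLOGPOOL';
```

Watching this clerk grow under features that re-read the log, such as AlwaysOn availability groups or replication, is a useful way to see the LogPool in action.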