Trace Flag 610

I want to post a comment from Marcel van der Holst of Microsoft today. It's about trace flag 610 and its impact on the size of the transaction log.

Marcel van der Holst is a developer in the SQL Server Storage Engine Team. During SQL Server 2008 development he was one of the key developers who implemented data compression in the storage engine. Besides data compression, Marcel was also responsible for a number of smaller features for B-Trees, Heaps, and LOBs. So have fun with his short overview of TF610!

TF610

TF610 can be used to get minimal logging in a non-empty B-Tree. The idea is that when you insert a large amount of data, you don't want to generate a lot of transaction log. Initially the idea was to do this automatically in the engine, but we ran into a bunch of issues and thus put it under a trace flag.
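As a minimal sketch of how the flag is typically used (the table and staging names here are made up for illustration, they are not from the post):

```sql
-- Hypothetical load scenario: enable TF610 globally, run a large
-- INSERT ... SELECT into a table that already has a clustered index
-- (a non-empty B-Tree), then turn the flag off again.
DBCC TRACEON (610, -1);

BEGIN TRANSACTION;

INSERT INTO dbo.FactSales   -- illustrative target table
SELECT *
FROM dbo.StagingSales;      -- illustrative staging table

COMMIT TRANSACTION;

DBCC TRACEOFF (610, -1);
```

With `-1` the flag is enabled for all sessions; alternatively it can be set at instance startup with the `-T610` startup parameter.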

Here are some of the things to be aware of:
1) When the transaction commits, it needs to write all pages that were minimally logged to disk. Even though pages get written asynchronously once they are full, it is possible that not all pages can be written in time on a slow I/O subsystem. This can make the operation slower than full logging, because with full logging SQL Server only needs to write the commit log record and is then done.
2) There is a bug in SQL 2008 where the transaction log grows very big (due to log reservation), even though the reserved space is never used. This is due to how rollback works for minimally logged pages. This bug was fixed in SQL 2008 R2 and in one of the CUs for SQL 2008 (I forgot which one).
3) If you have a large buffer pool (hundreds of GBs or more) and the I/O subsystem cannot keep up, the commit could take a very long time as well.
4) Don't make transactions too big. Even though the feature is built to be used when inserting a large number of rows, it does not work very well if you make the transactions too big. For instance, insert about 10,000 rows at a time (about 3-5 GB max). We had one customer inserting 1 TB in a single transaction, which caused some issues.
5) The feature is NOT designed to make inserts go faster. It is mainly written to reduce the size of the transaction log. There are cases where performance might be better, but there are also cases where it is slower.
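Point 4 can be sketched as a batching loop: instead of one huge transaction, move rows in chunks of roughly 10,000. This is a common T-SQL batching idiom, not code from the post, and all object names are made up:

```sql
-- Move rows from a staging table to the target in small transactions,
-- roughly 10,000 rows per commit, until the staging table is empty.
DECLARE @BatchSize int = 10000;
DECLARE @Rows int = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Delete a batch from staging and insert those same rows
    -- into the target via the OUTPUT clause.
    DELETE TOP (@BatchSize) s
    OUTPUT deleted.* INTO dbo.FactSales
    FROM dbo.StagingSales AS s;

    SET @Rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END;
```

Each commit keeps the amount of minimally logged, not-yet-flushed data small, so a slow I/O subsystem has a chance to keep up between batches.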

My recommendation is to test this feature in a test environment that has similar I/O characteristics to the production system and see if it helps.
I would only use it if your I/O is fast enough to keep up with the creation of the minimally logged pages.

However, in cases where you have a fast enough I/O subsystem, it is a great way to reduce the size of the transaction log.
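To verify whether the flag actually reduces log generation on your system, one way is to measure log bytes used by the load transaction, with and without TF610 enabled (the insert itself is elided; the DMV and DBCC command are standard SQL Server features):

```sql
-- Measure how much transaction log the current transaction has used.
BEGIN TRANSACTION;

-- ... run the bulk insert here ...

SELECT database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions
WHERE database_id = DB_ID();

COMMIT TRANSACTION;

-- Or look at overall log file usage per database:
DBCC SQLPERF (LOGSPACE);
```

Comparing the byte counts from two otherwise identical test runs shows directly how much log the minimal-logging path saves.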

  • Comment from: joe frp
    2011-01-19 @ 00:26:13

    hi, how are you? have you tested this feature yet?

  • Comment from: cmu
    2011-01-19 @ 08:30:37

    Hi Joe, I have not tested it by myself. But I hope it can be helpful for you if you are struggling with large logfiles.

  • Comment from: joe frp
    2011-01-27 @ 11:34:15

    ok. thanx for your answer chris. anyone tried it? please leave a comment to this page. tq.

  • Comment from: llaneflaweb1972
    2011-12-20 @ 01:15:32

    Positive site, where did u come up with the information on this posting? I have read a few of the articles on your website now, and I really like your style. Thanks a million and please keep up the effective work.

  • Comment from: cmu
    2011-12-20 @ 08:20:51

    Marcel van der Holst posted this information in a closed mailing list for MVPs. Marcel allowed us to post it in our blogs, and I think it can be helpful for others.
