Latest Comments

  • *****
    Comment from: Frank Kalis
    2011-08-09 @ 07:53:17

    Brilliant! I like that developer's humour... :-)

  • Christoph Ingenhaag
    Comment from: Christoph Ingenhaag
    2011-02-24 @ 19:25:46

    An indexed view is another choice. With 1,100,000 rows in MyTable, a SELECT makes 957 logical reads on my system using the IX_ID index. A SELECT on MyView (code follows) makes 2 logical reads:

        create view dbo.MyView with schemabinding
        as
        select count_big(*) as cnt from dbo.MyTable
        go
        create unique clustered index cuidx on MyView(Cnt)
        go
        select cnt from MyView with (noexpand)

    It is interesting that the noexpand hint is necessary with more than 1,000,000 rows in MyTable on my system... (with Express Edition you need this hint.) Also, the inserts are faster without the IX_ID index, and the update of the indexed view costs almost nothing. To check this I used the numbers function from Steve Kass (http://stevekass.com/2006/06/03/how-to-generate-a-sequence-on-the-fly/) and this statement:

        insert into MyTable(Payload)
        select replicate('ABC', 100) from dbo.numbers(1, 100000)

    Please check the plan. Maybe I have overlooked something.

  • Comment from: =tg=
    2011-01-28 @ 10:59:41

    Log shipping is a great way to do this. Unfortunately, with the built-in log shipping you can't use multiple locations, but you can write your own log shipping to do so.

  • *****
    Comment from: cmu
    2011-01-28 @ 10:02:38

    Good idea! Additionally, you can set up log shipping to prove the consistency of your log backups.

  • *****
    Comment from: Frank Kalis
    2011-01-26 @ 20:33:36

    LOL. Agreed, at those table dimensions I guess it is really not worth departing from the "standard" way of doing things. :-)

  • *****
    Comment from: =tg=
    2011-01-26 @ 15:53:15

    The actual problem was at a size of about 0 to 2000 rows, so not a huge table, and the exact count was not 100% relevant. We thought about querying the system tables too, but we found that at this table size the actual work for SQL Server was smaller letting it count than finding the right object and the corresponding partitions and then reading the result. Sure, there is overhead for the extra index, but the counting was done much more often than the inserts. For large tables I totally agree that the system tables are the much better way to go.

  • *****
    Comment from: Frank Kalis
    2011-01-26 @ 14:22:49

    If I were to perform a COUNT(*) constantly on a large table, I would maybe revise this strategy and question the requirement itself. Even with a tailored index just to support that query, the actual work still has to be carried out by SQL Server, and I wouldn't be surprised if the query still ran like a dog. However, if accuracy of the COUNT(*) isn't important at all (say, for example, if you want to use it for some kind of paging, or to track growth over time, or other cases where you can live with a more or less "good approximation"), it might be an option to get the row count from the system tables such as sys.partitions. Of course, with the usual caveat that system tables can change over time...
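    [Editor's note: a minimal sketch of the metadata approach described above, assuming SQL Server 2005 or later; dbo.MyTable stands in for the table in question.]

    ```sql
    -- Approximate row count from metadata instead of a full COUNT(*).
    -- NB: sys.partitions.rows is not guaranteed to be exact,
    -- and system views can change between versions.
    select sum(p.rows) as approx_row_count
    from sys.partitions as p
    where p.object_id = object_id('dbo.MyTable')
      and p.index_id in (0, 1);  -- 0 = heap, 1 = clustered index
    ```

    Summing over index_id 0 and 1 only avoids double-counting rows that also appear in nonclustered indexes.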

  • *****
    Comment from: Frank Kalis
    2011-01-24 @ 13:04:19

    Hi Thomas, a warm welcome from me again as well. Good to have you here! -- Cheers, Frank

  • *****
    Comment from: cmu
    2011-01-24 @ 12:01:41

    Hi Thomas, I've seen you at the European PASS Conference. Great to see you here again! Cheers, Christoph

  • *****
    Comment from: tosc
    2011-01-24 @ 10:01:30

    Hi Thomas, welcome! I wish you a nice day, Torsten