Archive for the ‘In-MemoryOLTP’ Category

Resource Governor and In-Memory OLTP

September 23, 2015

from BOL

 

A resource pool represents a subset of physical resources that can be governed. By default, SQL Server databases are bound to and consume the resources of the default resource pool. To protect SQL Server from having its resources consumed by one or more memory-optimized tables, and to prevent other memory users from consuming memory needed by memory-optimized tables, you should create a separate resource pool to manage memory consumption for the database with memory-optimized tables.

A database can be bound to only one resource pool, but you can bind multiple databases to the same pool. SQL Server allows binding a database without memory-optimized tables to a resource pool, but it has no effect. You may want to bind a database to a named resource pool if you plan to create memory-optimized tables in it in the future.

Before you can bind a database to a resource pool, both the database and the resource pool must exist. The binding takes effect the next time the database is brought online.

 

USE [master]
GO

CREATE RESOURCE POOL [mem_xtp_pool] WITH
(
    min_cpu_percent = 0,
    max_cpu_percent = 100,
    cap_cpu_percent = 100,
    min_memory_percent = 50,
    max_memory_percent = 60,
    min_iops_per_volume = 0,
    max_iops_per_volume = 0,
    AFFINITY SCHEDULER = AUTO
)
GO
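The pool only becomes active once Resource Governor is reconfigured (a step assumed here, before binding):

ALTER RESOURCE GOVERNOR RECONFIGURE
GO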

Bind the database to the resource pool.

EXEC sp_xtp_bind_db_resource_pool 'DatabaseName', 'mem_xtp_pool'
GO

Confirm the binding. The resource_pool_id for the database should now show the pool it is bound to, rather than NULL.

SELECT d.database_id, d.name, d.resource_pool_id
FROM sys.databases d
GO

 

Make the binding effective

Take the database offline and bring it back online after binding it to the resource pool.

 

ALTER DATABASE DatabaseName SET OFFLINE
GO
ALTER DATABASE DatabaseName SET ONLINE
GO

Categories: In-MemoryOLTP

xtp Transaction stuck

September 23, 2015

 

Why is an XTP transaction not killed or rolled back when a long-running process is stopped?

 

A long-running transaction inserting millions of records into an In-Memory OLTP table was stopped, but the SPID remains in a running state; it does not get killed or rolled back.

 

Even if you try to kill the session, it does not actually get killed. Run KILL <spid> WITH STATUSONLY; it might return the message below:

SPID 56: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 0 seconds.
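For reference, the status check has this form (56 being the SPID from this example):

KILL 56 WITH STATUSONLY
GO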

 

We checked the log_reuse_wait_desc column in sys.databases, and it showed the database was waiting for XTP_CHECKPOINT.
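A quick way to check this (substitute your database name):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DatabaseName'
GO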

 


 

We had not expected that the XTP transaction could keep waiting for a checkpoint.

 

Before manually issuing the checkpoint, we enabled trace flags 3502 and 3605 to see what happens in the background.

 

DBCC TRACEON (3502, 3605, -1);

CHECKPOINT

 


Categories: In-MemoryOLTP

BUCKET_COUNT for Hash Indexes for SQL Server Memory Optimized Tables

September 12, 2015

 

Indexes are the entry points to memory-optimized tables: an index lookup is used to quickly locate rows in memory.

[From BOL]

A hash index consists of a collection of buckets organized in an array. A hash function maps index keys to corresponding buckets in the hash index. The following figure shows three index keys that are mapped to three different buckets in the hash index. For illustration purposes the hash function name is f(x).

Index keys mapped to different buckets.

 

The hashing function used for hash indexes has the following characteristics:

  • SQL Server has one hash function that is used for all hash indexes.

  • The hash function is deterministic. The same index key is always mapped to the same bucket in the hash index.

  • Multiple index keys may be mapped to the same hash bucket.

  • The hash function is balanced, meaning that the distribution of index key values over hash buckets typically follows a Poisson distribution.

    Poisson distribution is not an even distribution. Index key values are not evenly distributed in the hash buckets. For example, a Poisson distribution of n distinct index keys over n hash buckets results in approximately one third empty buckets, one third of the buckets containing one index key, and the other third containing two index keys. A small number of buckets will contain more than two keys.

If two index keys are mapped to the same hash bucket, there is a hash collision. A large number of hash collisions can have a performance impact on read operations.

The in-memory hash index structure consists of an array of memory pointers. Each bucket maps to an offset in this array. Each bucket in the array points to the first row in that hash bucket. Each row in the bucket points to the next row, thus resulting in a chain of rows for each hash bucket, as illustrated in the following figure.

 

The in-memory hash index structure.

 

The figure has three buckets with rows. The second bucket from the top contains the three red rows. The fourth bucket contains the single blue row. The bottom bucket contains the two green rows. These could be different versions of the same row.

 

Create In-Memory OLTP table

 

USE [AdventureWorks2014]
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [Sales].[SalesOrderDetail_imoltp4]
(
	[SalesOrderID] [int] NOT NULL,
	[SalesOrderDetailID] [int] IDENTITY(1,1) NOT NULL,
	[CarrierTrackingNumber] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
	[OrderQty] [smallint] NOT NULL,
	[ProductID] [int] NOT NULL,
	[SpecialOfferID] [int] NOT NULL,
	[UnitPrice] [money] NOT NULL,
	[UnitPriceDiscount] [money] NOT NULL CONSTRAINT [DF_SalesOrderDetail_UnitPriceDiscount4]  DEFAULT ((0.0)),
	[ModifiedDate] [datetime] NOT NULL CONSTRAINT [DF_SalesOrderDetail_ModifiedDate4]  DEFAULT (getdate()),

INDEX [IX_SalesOrderDetail_ProductId_4] NONCLUSTERED 
(
	[ProductID] ASC
),
CONSTRAINT [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID_IN4] PRIMARY KEY NONCLUSTERED HASH 
(
	[SalesOrderID],
	[SalesOrderDetailID]
)WITH ( BUCKET_COUNT = 128)
)WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )
GO

 

We created a PRIMARY KEY NONCLUSTERED HASH index with BUCKET_COUNT = 128. BUCKET_COUNT specifies the number of buckets in the hash index; in this case we have 128 buckets. The maximum value allowed for BUCKET_COUNT is 1,073,741,824.

 

CONSTRAINT [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID_IN4] PRIMARY KEY NONCLUSTERED HASH 
(
	[SalesOrderID],
	[SalesOrderDetailID]
)WITH ( BUCKET_COUNT = 128)

 

Using the dynamic management view sys.dm_db_xtp_hash_index_stats, we can monitor how the buckets are being used.


 

select * from sys.dm_db_xtp_hash_index_stats

 


 

HASH Index Statistics

 

Execute the below code to see how BUCKET_COUNT is used.

SELECT 
   object_name(hs.object_id) AS 'object name', 
   i.name as 'index name', 
   i.index_id,
   hs.total_bucket_count,
   hs.empty_bucket_count,
   floor((cast(empty_bucket_count as float)/total_bucket_count) * 100) AS 'empty_bucket_percent',
   hs.avg_chain_length, 
   hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs 
   JOIN sys.indexes AS i 
   ON hs.object_id=i.object_id AND hs.index_id=i.index_id

 

 


 

As you can see, the PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID_IN4 primary key hash index has a total_bucket_count of 128. As we insert records, empty_bucket_count decreases, while the avg_chain_length and max_chain_length columns show how many rows are chained in each bucket.

 

 


 

After inserting a few million records, all of the buckets were used and each bucket held 90,156 rows.
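As a rough guideline, BUCKET_COUNT is usually sized at one to two times the number of distinct index key values, which makes the 128 used in this demo far too small for millions of rows. A quick sketch to estimate a better value from the disk-based source table (assuming Sales.SalesOrderDetail was the source of the inserts):

SELECT COUNT(*) AS distinct_key_values
FROM (SELECT DISTINCT SalesOrderID, SalesOrderDetailID
      FROM Sales.SalesOrderDetail) AS k
GO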

 


Categories: In-MemoryOLTP

In-Memory OLTP – Part 12

July 14, 2015

 

Monitor XTP Transaction

A couple of DMVs are provided to monitor In-Memory OLTP transactions:

sys.dm_db_xtp_transactions

sys.dm_xtp_transaction_recent_rows

sys.dm_xtp_transaction_stats

 

sys.dm_db_xtp_transactions : Reports the active transactions in the In-Memory OLTP database engine.
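A minimal example of querying it (column list trimmed; run it in the context of the database with memory-optimized tables):

SELECT xtp_transaction_id, transaction_id, session_id, state_desc, result_desc
FROM sys.dm_db_xtp_transactions
GO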

 


The result column of this DMV can have the following possible values:

0 – IN PROGRESS

1 – SUCCESS

2 – ERROR

3 – COMMIT DEPENDENCY

4 – VALIDATION FAILED (RR)

5 – VALIDATION FAILED (SR)

6 – ROLLBACK

 

sys.dm_xtp_transaction_stats: Reports statistics about transactions that have run since the server started.
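Since it is a server-wide view, a plain select is enough to see the counters:

SELECT * FROM sys.dm_xtp_transaction_stats
GO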

 


Categories: In-MemoryOLTP

In-Memory OLTP –Part 11

July 8, 2015

Garbage Collection of Checkpoint Files

What is garbage collection and how does it work?

 

From BOL:

 

A data row that has been deleted by a transaction that is no longer active is considered stale and is eligible for garbage collection. Deleted rows are recorded in the DELTA files.

 

  • Non-blocking. Garbage collection is distributed over time with minimal impact on the workload.

  • Cooperative. User transactions participate in garbage collection along with the main garbage-collection thread.

  • Efficient. User transactions delink stale rows in the access path (the index) being used. This reduces the work required when the row is finally removed.

  • Responsive. Memory pressure leads to aggressive garbage collection.

  • Scalable. After commit, user transactions do part of the work of garbage collection. The more transaction activity, the more the transactions delink stale rows.

Garbage collection is controlled by the main garbage-collection thread. The main thread runs every minute, or more often when the number of committed transactions exceeds an internal threshold. The garbage collector works as follows:

 

After a user transaction commits, it identifies all queued items associated with the scheduler it ran on and then releases the memory. If the garbage collection queue on the scheduler is empty, it searches for any non-empty queue in the current NUMA node. If there is low transactional activity and there is memory pressure, the main garbage-collection thread can garbage-collect rows from any queue. If there is no transactional activity after (for example) deleting a large number of rows and there is no memory pressure, the deleted rows will not be garbage collected until transactional activity resumes or there is memory pressure.
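To observe the garbage-collection activity and the per-scheduler queues described above, the related DMVs can be queried; a minimal sketch:

SELECT * FROM sys.dm_xtp_gc_stats
SELECT * FROM sys.dm_xtp_gc_queue_stats
GO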

 

Below are the steps involved, to show how garbage collection of checkpoint files basically works.

 

Add an In-Memory OLTP filegroup with two containers (FILESTREAM containers). The files will be distributed across the two containers: in this demo the first container is used for DATA files and the second for DELTA files.
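The filegroup and containers can be created with T-SQL along these lines (the filegroup name and container paths below are made up for illustration):

ALTER DATABASE AdventureWorks2014
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE AdventureWorks2014
    ADD FILE (NAME = 'imoltp_container1', FILENAME = 'C:\IMOLTP\container1')
    TO FILEGROUP imoltp_fg
GO
ALTER DATABASE AdventureWorks2014
    ADD FILE (NAME = 'imoltp_container2', FILENAME = 'D:\IMOLTP\container2')
    TO FILEGROUP imoltp_fg
GO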

 


Check the number of files created. I have 16 GB of memory and one physical CPU (dual core, 4 logical processors).

 

select state,file_type_desc, state_desc, internal_storage_slot,file_size_in_bytes, file_size_used_in_bytes,
inserted_row_count, deleted_row_count, lower_bound_tsn, upper_bound_tsn, last_backup_page_count, drop_table_deleted_row_count
from sys.dm_db_xtp_checkpoint_files order by  file_type_desc, upper_bound_tsn

 


 

select file_type_desc, state_desc,relative_file_path, internal_storage_slot
from sys.dm_db_xtp_checkpoint_files  order by container_id, file_type_desc, upper_bound_tsn

 


 

Insert 90,000 records; 2 new CFPs in the UNDER CONSTRUCTION state were created.

 


Let’s insert some more records

The column file_size_used_in_bytes will be updated after the next checkpoint.

 


Automatic checkpoint completion: this happens when the transaction log has grown by more than 512 MB since the last automatic checkpoint. In other words, an automatic checkpoint is kicked off for every 512 MB of accumulated log records containing changes from both memory-optimized and disk-based tables. Unlike a checkpoint for disk-based tables, the persistence of the data for memory-optimized tables is done continuously by a background thread; the completion of a checkpoint updates internal metadata information.

The UNDER CONSTRUCTION files will move to ACTIVE.


 

I have disabled automatic merge so that I can issue a manual merge for this demo.
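One way to disable automatic merge for testing is trace flag 9851 (assumed to be the mechanism used in this demo):

DBCC TRACEON (9851, -1)
GO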

 

exec sys.sp_xtp_merge_checkpoint_files adventureworks2014, 2, 44

 

select * from sys.dm_db_xtp_merge_requests

 


Perform a log backup, followed by a log-truncation checkpoint, so the merge request can be installed.
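A sketch of that step (the backup path is hypothetical):

BACKUP LOG AdventureWorks2014 TO DISK = N'C:\Backup\AdventureWorks2014_log.trn'
GO
CHECKPOINT
GO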


Check the files which were merged.

select state,file_type_desc, state_desc, internal_storage_slot,file_size_in_bytes, file_size_used_in_bytes,
inserted_row_count, deleted_row_count, lower_bound_tsn, upper_bound_tsn, last_backup_page_count, 
drop_table_deleted_row_count from sys.dm_db_xtp_checkpoint_files  where state = 4
order by container_id, file_type_desc, upper_bound_tsn

 


Once the merge is complete, the files should be in the REQUIRED FOR BACKUP/HA state.


After the backup and a log-truncation checkpoint, the [REQUIRED FOR BACKUP/HA] files will be moved to TOMBSTONE.


 

Issue a manual garbage collection

EXEC sp_filestream_force_garbage_collection

 


 

Files marked as TOMBSTONE were removed from the disk.


After several checkpoints


Categories: In-MemoryOLTP

In-Memory OLTP –Part 10

July 2, 2015

Merging of Checkpoint Files

The set of files involved in a checkpoint grows with each checkpoint. However, the active content of a data file decreases as more and more of its versions are marked as deleted in the corresponding delta file. Since the recovery process reads the contents of all data and delta files in the checkpoint, crash recovery performance degrades as the number of relevant rows in each data file decreases.

The checkpoint system handles this through an automatic process called merge. SQL Server 2014 implements the following merge policy:

· A merge is scheduled if 2 or more consecutive CFPs can be consolidated, after accounting for deleted rows, such that the resultant rows fit into 1 CFP of ideal size. The ideal size of a CFP is determined as follows:

  • For machines with less than 16 GB of memory, the data file is 16 MB and the delta file is 1 MB.
  • For machines with more than 16 GB of memory, the data file is 128 MB and the delta file is 8 MB.

· A single CFP can be self-merged if the data file exceeds 256 MB and over half of the rows are deleted. A data file can grow larger than 128 MB if, for example, a single transaction or multiple concurrent transactions insert/update a large amount of data, forcing the data file to grow beyond its ideal size, because a transaction cannot span multiple CFPs.

There are 2 ways to merge CFPs.

  • Automatic Merge

Will cover this later.

  • Manual Merge (sys.sp_xtp_merge_checkpoint_files)
In most cases, the automatic merging of checkpoint files will be sufficient to keep the number of files to a manageable number. However, in rare situations, or for testing purposes, you might want to use a manual merge.

To merge manually, find the minimum and maximum TSN, then call sys.sp_xtp_merge_checkpoint_files with the database name and the min and max TSN values.

select min(lower_bound_tsn),max(upper_bound_tsn) from sys.dm_db_xtp_checkpoint_files cf


 

exec sys.sp_xtp_merge_checkpoint_files adventureworks2014, 379, 398

Once the request is placed, check the status of the merge request using sys.dm_db_xtp_merge_requests.

select * from sys.dm_db_xtp_merge_requests


 

  • Requested means that a merge request exists.

  • Pending means that the merge is being processed. (A request can remain in the pending state if a checkpoint is required.)

  • Installed means that the merge is complete.

  • Abandoned means that the merge could not complete, perhaps due to lack of storage.

 

select file_type_desc, state_desc, internal_storage_slot,file_size_in_bytes, file_size_used_in_bytes,
inserted_row_count, deleted_row_count, lower_bound_tsn, upper_bound_tsn, last_backup_page_count, drop_table_deleted_row_count 
from sys.dm_db_xtp_checkpoint_files where state_desc='PRECREATED'
order by container_id, file_type_desc, upper_bound_tsn

There are 16 files in the PRECREATED state. The demo was run on a machine with 4 logical processors and 16 GB of memory.

  • PRECREATED – A small set of Checkpoint File Pairs (CFPs) are kept pre-allocated to minimize or eliminate any waits to allocate new files as transactions are being executed. These are full sized with data file size of 128MB and delta file size of 8 MB, but contain no data. The number of CFPs is computed as the number of logical processors or schedulers, with a minimum of 8. This is a fixed storage overhead in databases with memory-optimized tables.


 


 

There are 3 data files marked UNDER CONSTRUCTION because a checkpoint has not been taken, but these files together store the 2,700,000 rows that were inserted. If the database is restarted, the data rows will be loaded from the transaction log and NOT from these files, as they are not part of a durable checkpoint.

select file_type_desc, state_desc, relative_file_path,internal_storage_slot,file_size_in_bytes, file_size_used_in_bytes,
inserted_row_count, deleted_row_count, lower_bound_tsn, upper_bound_tsn, last_backup_page_count, drop_table_deleted_row_count 
from sys.dm_db_xtp_checkpoint_files where state_desc='UNDER CONSTRUCTION'
order by container_id, file_type_desc, upper_bound_tsn

 


 

You will notice that there will be 3 active files as soon as the checkpoint occurs, and data will be written to these files.

  • ACTIVE – These contain the inserted/deleted rows from previous closed checkpoints. These CFPs contain all required inserted/deleted rows required before applying the active part of the transaction log at the database restart. We expect that size of these CFPs to be approximately 2x of the in-memory size of memory-optimized tables, assuming the merge operation is keeping up with the transactional workload.

select file_type_desc, state_desc,relative_file_path, internal_storage_slot,file_size_in_bytes, file_size_used_in_bytes,
inserted_row_count, deleted_row_count, lower_bound_tsn, upper_bound_tsn, last_backup_page_count, drop_table_deleted_row_count 
from sys.dm_db_xtp_checkpoint_files where state_desc='Active'
order by container_id, file_type_desc, upper_bound_tsn

 


Now, execute a manual checkpoint by running the following command and see the state transition in the CFPs. You will note that the UNDER CONSTRUCTION CFPs are now marked ACTIVE, as they are part of the durable checkpoint we have just completed. The manual checkpoint closed the data file in its internal_storage_slot; this data file can potentially be merged in the future as it qualifies under the merge policy.


 

Let’s issue the CHECKPOINT

 

CHECKPOINT

select file_type_desc, state_desc, internal_storage_slot,file_size_in_bytes, file_size_used_in_bytes,
inserted_row_count, deleted_row_count, lower_bound_tsn, upper_bound_tsn, last_backup_page_count, drop_table_deleted_row_count
from sys.dm_db_xtp_checkpoint_files where state = 2 order by container_id, file_type_desc, upper_bound_tsn

 


Let’s force a manual merge by executing the command below; the merge will consolidate the active files.

select min(lower_bound_tsn),max(upper_bound_tsn) from sys.dm_db_xtp_checkpoint_files


exec sys.sp_xtp_merge_checkpoint_files adventureworks2014, 392, 415

Check the Merge Request

select * from sys.dm_db_xtp_merge_requests

 


 

As we have forced the merge, you will see a file marked [MERGE TARGET].

 


 

The merge request will remain in the Pending state if any of the files is waiting for a backup or for a checkpoint. In this case a backup is required to install the merge request.

 


 

Let’s do a differential backup; as the backup runs, the merge request will be installed and the source CFPs will be marked as [MERGED SOURCE].
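A sketch of the differential backup (the path is hypothetical):

BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Backup\AdventureWorks2014_diff.bak'
    WITH DIFFERENTIAL
GO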


 

Here is the output of the DMVs. It shows that the MERGE TARGET is now changed to ACTIVE state and all the CFPs that were used as source of the MERGE are now marked as MERGED SOURCE.


 

*** The state REQUIRED FOR BACKUP/HA means that these CFPs cannot be garbage collected, as they are needed for operational correctness of a database with memory-optimized tables.

 


Categories: In-MemoryOLTP

In-Memory OLTP –Part 9

June 22, 2015

How is my Data written?

 

The data in memory-optimized tables is stored as free-form data rows that are linked through one or more in-memory indexes, in memory. There are no page structures for data rows, such as those used for disk-based tables. When the application is ready to commit the transaction, the In-Memory OLTP engine generates the log records for the transaction. The persistence of memory-optimized tables is done with a set of data and delta files using a background thread. The data and delta files are located in one or more containers (using the same mechanism used for FILESTREAM data). These containers are mapped to a new type of filegroup, called a memory-optimized filegroup.

Data is written to these files in a strictly sequential fashion, which minimizes disk latency for spinning media. You can use multiple containers on different disks to distribute the I/O activity. Data and delta files in multiple containers on different disks will increase recovery performance when data is read from the data and delta files on disk, into memory.

An application does not directly access data and delta files. All data reads and writes use in-memory data.

 

At this moment I don’t have any pending checkpoint data to be written to the data and delta files, so it is as good as a clean in-memory table.

Forced checkpoint manually.


Let’s insert a couple of records: 900 records inserted.

 


As we know, an automatic checkpoint occurs when the log has grown by 512 MB since the last checkpoint. Let’s check how much log has been generated by our last insert.

 


Since the last checkpoint the log has grown by less than 1 MB, so no checkpoint will occur, as it does not reach the 512 MB mark, and the data remains in the log. To check that, we will use the undocumented function sys.fn_dblog_xtp, which is similar to sys.fn_dblog but specific to In-Memory OLTP objects.
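A minimal call looks like this (being undocumented, it should only be used on test systems; a filtered version appears later in this post):

SELECT * FROM sys.fn_dblog_xtp(NULL, NULL)
GO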

 

So, we have done 900 inserts into the table, which can be seen in the log. These transactions are still waiting to be written to the DATA file.

 


Issue a manual checkpoint to force the log records to be flushed out and written to the DATA files. Below is the set of events captured in the database log.

 

All the log records will be flushed out


Description

XTP chained record ver 3: log_Discriminator = 0x00000001, log_prevRec = 00000000:00000000:0000

XTP chained record ver 3: log_Discriminator = 0x00000003, log_prevRec = 00000000:00000000:0000
XTP complete checkpoint ver 4: { LSN = ;000004f9:00000040:0003;, previous checkpoint completion LSN = ;000004f9:00000068:0003;, close LSN = ;000004f9:000000c8:0005;, CkptTs = 0x0000016d, SerializeTs = 0x0000016f };, { CkptDir ==> GUIDs = { Rowset 83a190b
HkFsStgCreateNewFile;0x01
Operation CREATE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-000000f8-0002
InsertFSV;0x01

Field m_typeFlagBits
Operation CLOSE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-000000f8-0002;CrLSN 000004f9-000000f8-0002;OpLSN 000004f9-00000100-0009;OldFileSz 0;FileSz 0;Off 0;Sz 0;Flg 0007

HkFsStgCreateNewFile;0x01
Operation CREATE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000110-0002
InsertFSV;0x01

Operation CLOSE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000110-0002;CrLSN 000004f9-00000110-0002;OpLSN 000004f9-00000110-0008;OldFileSz 0;FileSz 0;Off 0;Sz 0;Flg 0007

HkFsStgCreateNewFile;0x01
Operation CREATE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000118-0004
InsertFSV;0x01

Operation CLOSE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000118-0004;CrLSN 000004f9-00000118-0004;OpLSN 000004f9-00000118-000a;OldFileSz 0;FileSz 0;Off 0;Sz 0;Flg 0007

HkFsStgCreateNewFile;0x01
Operation CREATE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000120-0002
InsertFSV;0x01

Operation CLOSE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000120-0002;CrLSN 000004f9-00000120-0002;OpLSN 000004f9-00000120-0008;OldFileSz 0;FileSz 0;Off 0;Sz 0;Flg 0007

Action 0 (HOBTCOUNT) on rowset 72057594038321152. Leaf page count: 2, Reserved page count: 4, Used page count: 4
Action 1 (ROWSETCOUNT) on rowset 72057594038321152. Row count: 6.
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 1, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 2, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 3, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 4, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 5, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 6, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 7, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 8, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 9, mod count: 1390
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594038321152. Column Id: 10, mod count: 1390
Action 0 (HOBTCOUNT) on rowset 72057594038386688. Leaf page count: 1, Reserved page count: 2, Used page count: 2
Action 1 (ROWSETCOUNT) on rowset 72057594038386688. Row count: 6.
Action 0 (HOBTCOUNT) on rowset 72057594060144640. Leaf page count: 2, Reserved page count: 3, Used page count: 3
Action 1 (ROWSETCOUNT) on rowset 72057594060144640. Row count: 30.
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594060144640. Column Id: 1, mod count: 478
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594060144640. Column Id: 2, mod count: 702
Action 2 (ROWSETCOLUMNCOUNT) on rowset 72057594060144640. Column Id: 3, mod count: 478
Action 0 (HOBTCOUNT) on rowset 72057594060210176. Leaf page count: 1, Reserved page count: 2, Used page count: 2
Action 1 (ROWSETCOUNT) on rowset 72057594060210176. Row count: 30.

File Id 65537; Old Header Len = 447; New HeaderLen = 447; Old Container Size = 303038464; New Container Size = 338690048

log_minRecoveryLsn 000004f9:00000128:0017;log_replbeginlsn 00000000:00000000:0000;log_replnextlsn 00000000:00000000:0000;log_distbackuplsn 00000000:00000000:0000;log_distlastlsn 00000000:00000000:0000

XTP chained record ver 3: log_Discriminator = 0x00000001, log_prevRec = 00000000:00000000:0000

HkFsStgCreateNewFile;0x01
Operation CREATE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000170-0002
InsertFSV;0x01

XTP chained record ver 3: log_Discriminator = 0x00000000, log_prevRec = 00000000:00000000:0000
XTP chained record ver 3: log_Discriminator = 0x00000003, log_prevRec = 00000000:00000000:0000
XTP complete checkpoint ver 4: { LSN = ;000004f9:000000c8:0003;, previous checkpoint completion LSN = ;000004f9:000000f0:0002;, close LSN = ;000004f9:00000148:0005;, CkptTs = 0x00000173, SerializeTs = 0x00000176 };, { CkptDir ==> GUIDs = { Rowset 83a190b
Operation CLOSE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000170-0002;CrLSN 000004f9-00000170-0002;OpLSN 000004f9-00000178-0006;OldFileSz 0;FileSz 0;Off 0;Sz 0;Flg 0007

HkFsStgCreateNewFile;0x01
Operation CREATE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000188-0002
InsertFSV;0x01

Operation CLOSE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f9-00000188-0002;CrLSN 000004f9-00000188-0002;OpLSN 000004f9-00000188-0008;OldFileSz 0;FileSz 0;Off 0;Sz 0;Flg 0007

HkFsStgDeleteExistingFile;0x

GarbageCollector;0x

HkFsStgDeleteExistingFile;0x

HkFsStgDeleteExistingFile;0x

HkFsStgDeleteExistingFile;0x

HkFsStgDeleteExistingFile;0x

HkFsStgDeleteExistingFile;0x

HkFsStgCleanupStorageTable;0x

Operation DELETE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f7-00000038-0002

Operation DELETE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f7-00000050-0002

Operation DELETE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f8-000001b0-0002

Operation DELETE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f8-000001c8-0002

Operation DELETE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f8-00000100-0003

Operation DELETE;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f8-00000108-0003

GarbageCollector;0x

Operation GARBAGE COLLECTED;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f7-00000210-0002;FileSize 16777216

Operation GARBAGE COLLECTED;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f7-00000218-0002;FileSize 1048576

Operation GARBAGE COLLECTED;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f7-00000350-0002;FileSize 16777216

Operation GARBAGE COLLECTED;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f7-00000360-0002;FileSize 1048576

Operation GARBAGE COLLECTED;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f8-00000070-0002;FileSize 16777216

Operation GARBAGE COLLECTED;File Id 65537;Name 83a190bd-22eb-4014-90f3-119be1bd3aa5\2031cae3-31ca-4826-88b5-8966355c57d400004f8-00000088-0002;FileSize 1048576

Field m_typeFlagBits

 

select * from sys.fn_dblog_xtp(NULL, NULL) where [Transaction ID]='0000:0007ba6e'


Categories: In-MemoryOLTP