Tips to Improve LINQ to SQL Performance
LINQ to SQL is a powerful technology that can do as much harm as good if it is misused. Here is how to get more out of your LINQ to SQL efforts.
Tip 1: Ditch the Extra Baggage with ObjectTrackingEnabled
LINQ to SQL, by default, behaves like it's packing for every possibility. It carries every provision it thinks it'll ever need, and your application carries that extra weight. If you know you're only going for a short hike, why not tell it to leave the pack at home?

That's where the ObjectTrackingEnabled property of the DataContext comes in. By default this property is true, which causes LINQ to SQL to keep track of every change you make to your data in case you want to save those changes later. If you know you're only reading data and won't be changing it, I suggest setting this property to false. Then all of that change-tracking overhead goes away.
The performance boost does come at a small price, however. Deferred loading will no longer work. You'll have to use LoadWith or manually extract related data yourself using projections.
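Here's a minimal sketch of a read-only query with tracking turned off. The NorthwindDataContext, Customers, and City names are placeholders I'm assuming for illustration:

using (var dataContext = new NorthwindDataContext())
{
    // Must be set before the first query executes.
    // Tells LINQ to SQL we're only reading, so skip change tracking.
    dataContext.ObjectTrackingEnabled = false;

    // Reads work as usual, but SubmitChanges() would now throw,
    // and deferred loading of related entities is disabled.
    var customers = (from cust in dataContext.Customers
                     where cust.City == "London"
                     select cust).ToList();
}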
Tip 2: Slim Down Your Queries with Projections
Let's say I have a Customer table with 20 fields, but I'm only interested in three: FirstName, LastName, and Email. What do you suppose LINQ to SQL will do if I execute this code?

var customer =
    (from cust in dataContext.Customers
     where cust.ID == customerID
     select cust).Single();

var customerLite = new {
    customer.FirstName,
    customer.LastName,
    customer.Email
};
The answer is it'll happily issue a 'SELECT' statement asking for all 20 fields in the Customer table so it can populate every field of the Customer entity it gives me back (even though I only care about three fields). That seems a bit wasteful, don't you agree?
Now suppose I rewrote the code to look like this:

var customerLite =
    (from cust in dataContext.Customers
     where cust.ID == customerID
     select new {
         cust.FirstName,
         cust.LastName,
         cust.Email
     }).Single();
This time LINQ to SQL will issue a 'SELECT' statement asking only for the three fields I care about because I gave it enough information before it executed the query to know specifically what I wanted.
The technique I used in the second example is called projecting (or shaping) the data. Using an anonymous type, I tell LINQ to SQL exactly what I'm interested in, and that's all it asks for. For a single customer record this likely won't make a big difference, but if I'm retrieving a list of thousands or millions of customer records, the performance gain will be significant.
One real-world scenario where I've found this technique helpful is returning data from a WCF service. I often don't want to return entire entities or an entire object graph from my service calls, so I project the data from LINQ to SQL into Data Transfer Objects (DTOs) and send those across the wire instead. These DTOs contain only the data I need, which trims the payload and improves the performance of my service.
The only downside I've found with using projections is losing the ability to use the LINQ-generated entity classes (like the Customer class for customer records). This means I lose some entity-dependent features like deferred loading of child entities and state tracking. Since I tend to use projections mostly when I'm pulling lots of records for display or analysis purposes (and not for updating data), I'm generally okay with that tradeoff.
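As a sketch of that DTO approach (CustomerDto and GetCustomersByCity are hypothetical names I'm using for illustration):

// A hypothetical DTO holding only the fields the service needs to send.
public class CustomerDto
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

// Project straight into the DTO so only three columns are queried
// and only three fields cross the wire.
public List<CustomerDto> GetCustomersByCity(string city)
{
    using (var dataContext = new NorthwindDataContext())
    {
        return (from cust in dataContext.Customers
                where cust.City == city
                select new CustomerDto
                {
                    FirstName = cust.FirstName,
                    LastName = cust.LastName,
                    Email = cust.Email
                }).ToList();
    }
}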
Tip 3: Optimize Your Optimistic Concurrency Checking
LINQ to SQL will help us ensure that multiple users working with the same record don't overwrite each other's changes. It does this by enabling optimistic concurrency checking on all database updates by default and alerting us if it detects we're trying to update a record that has changed since we last retrieved it.

To see how LINQ to SQL does this, take a look at the entities LINQ to SQL generates for you. You'll see that each property on an entity has an UpdateCheck attribute, which has possible values of Always, Never, and WhenChanged.
By default, UpdateCheck is set to Always, which means LINQ to SQL will check the value you're saving for that property against what's in the database. (The checking comes in the form of extra qualifiers added to the 'WHERE' clause of your update query.) If they're different, LINQ to SQL assumes the record has changed since you last retrieved it and throws a concurrency exception.
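As a rough illustration (simplified, not the literal SQL LINQ to SQL emits), updating the Email field of a three-field Customer entity produces something like this:

UPDATE [Customers]
SET [Email] = @p3
WHERE [ID] = @p0
  AND [FirstName] = @p1
  AND [LastName] = @p2
  AND [Email] = @p4

If no row matches all of the original values, LINQ to SQL concludes the record changed underneath you and raises a ChangeConflictException.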
Unless you need optimistic concurrency checking in your application, I recommend setting UpdateCheck to Never for the properties on your entities. This will greatly speed up your updates by eliminating all that extra checking. For many applications, a "last write wins" approach is perfectly acceptable.
If, however, you do need optimistic concurrency checking in your application, then I suggest adding a column of type timestamp to each of your tables in SQL Server that you want checked. I often do this by adding a column called "RowVersion" to my tables. Then when LINQ to SQL generates an entity class from that table, it will automatically set the UpdateCheck property to Never for all properties in the entity and use the timestamp column in the table for concurrency checking instead.
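For reference, here's a hand-trimmed sketch of what that designer-generated mapping looks like (the property bodies and column types are simplified):

using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    private string _FirstName;
    private Binary _RowVersion;

    // With a version column present, per-property checking is skipped.
    [Column(Storage = "_FirstName", DbType = "NVarChar(40)",
            UpdateCheck = UpdateCheck.Never)]
    public string FirstName
    {
        get { return _FirstName; }
        set { _FirstName = value; }
    }

    // SQL Server bumps this rowversion value on every write, so a single
    // comparison against it is enough to detect a concurrent change.
    [Column(Storage = "_RowVersion", DbType = "rowversion NOT NULL",
            IsDbGenerated = true, IsVersion = true,
            UpdateCheck = UpdateCheck.Never)]
    public Binary RowVersion
    {
        get { return _RowVersion; }
        set { _RowVersion = value; }
    }
}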
Tip 4: Keep the Number of Parameters Down
LINQ to SQL always uses parameterized queries to prevent SQL injection attacks. For example, suppose I have a Product table I wish to query for a single product. My LINQ query might look like this:
var query =
    from p in dataContext.Products
    where p.ProductID == 1
    select p;
And LINQ to SQL, in turn, will generate this:
SELECT [t0].[ProductID], [t0].[ProductName], [t0].[CategoryID]
FROM [Products] AS [t0]
WHERE [t0].[ProductID] = @p0
It added a parameter called @p0, which will be assigned the value 1 when the query is run.
Now obviously this is a contrived example, and a simple query like this with a single parameter will run pretty quickly. However, I've noticed through real-world testing that the greater the number of parameters, the slower LINQ to SQL performs. This problem is especially likely to manifest itself if your LINQ query uses the Contains() method, which LINQ to SQL translates to a 'WHERE IN' clause that takes the form of 'WHERE IN (@p0, @p1, @p2, ... ).'
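To make the Contains() case concrete, here's a small sketch (reusing the Product table from above; the CategoryID values are made up):

// Each element of this list becomes its own query parameter.
var categoryIDs = new List<int> { 1, 2, 3, 5, 8 };

var query = from p in dataContext.Products
            where categoryIDs.Contains(p.CategoryID)
            select p;

// The generated SQL ends with roughly:
//   WHERE [t0].[CategoryID] IN (@p0, @p1, @p2, @p3, @p4)
// A 500-element list means 500 parameters, and translation slows down.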
If you have a query that's performing slowly, check the number of parameters LINQ to SQL is generating. There's no magic number, but I try to keep it to around a dozen at most; in my own applications I've seen noticeable delays as soon as a query goes much beyond that.
By Mohd Zulkamal