This blog is titled the way it is, because this is the method by which I came to learn most of the following information. 🤪
Wednesday, December 31, 2014
Alert for Bootstrap
Do this if you want a Bootstrap alert at the top of your page. Swap out 'alert-warning' for 'alert-danger' if you want red instead of yellow.
@if (Model.Message != null)
{
    <div class="alert alert-warning">
        <button type="button" class="close" data-dismiss="alert">×</button>
        @Model.Message
    </div>
}
Delete a Remote Git Branch
To delete a branch from the server, do the following:
git push origin --delete serverfix
Saturday, December 27, 2014
File changes aren't ignored even when the file is in .gitignore
If Git insists on checking in changes to a file even though .gitignore looks like it should exclude it, the file is probably already in the repository. Git will not ignore a file that has already been committed.
To resolve this, open a Bash prompt in the root folder of the repository. (A shortcut for this in SourceTree is the 'Actions/Open in Terminal' menu.) Run the following:
git rm --cached FolderName/MyFileName.ext
Then commit the change and push it up, and the file should be ignored from then on (assuming your .gitignore was correct to begin with). Note that this will leave the file as-is in your source folder.
Tuesday, December 9, 2014
Call via HttpClient to an SSL endpoint without a valid certificate
This isn't always a good idea, but sometimes you have to call an HTTPS endpoint that has an SSL certificate that is not valid. To allow the connection and disregard the certificate error, you can do something like the following:
// Requires: using System.Net; and using System.Net.Http;
// This is the line that will ignore the certificate issues (note that it applies process-wide).
ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
using (var client = new HttpClient(new HttpClientHandler()))
{
    var response = client.GetAsync(uri).Result;
}
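If you would rather not turn off certificate validation for every request in the process, a slightly safer variation is to scope the override to the one host you expect to have the bad certificate. This is just a sketch of my own, not from the original post: the endpoint URL is hypothetical, and it assumes the .NET Framework HttpClientHandler/HttpWebRequest stack, where the callback's sender is the outgoing web request.
using System;
using System.Net;
using System.Net.Http;
using System.Net.Security;

class Program
{
    static void Main()
    {
        // Hypothetical endpoint with a self-signed certificate.
        var uri = new Uri("https://self-signed.example.com/api/values");

        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
            {
                // Let normal, valid certificates through everywhere.
                if (sslPolicyErrors == SslPolicyErrors.None)
                    return true;

                // Only ignore certificate errors for the one host we expect to be self-signed.
                var request = sender as HttpWebRequest;
                return request != null && request.RequestUri.Host == uri.Host;
            };

        using (var client = new HttpClient(new HttpClientHandler()))
        {
            var response = client.GetAsync(uri).Result;
            Console.WriteLine(response.StatusCode);
        }
    }
}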
Wednesday, September 17, 2014
Missing Indices in SQL Server
Run the following to show possible missing indices in SQL Server. Use this information carefully!!! Do your own evaluation to determine if you really need any of the indices this suggests. This is only a recommendation.
SELECT
mid.statement
,migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure
,OBJECT_NAME(mid.Object_id) AS table_name,
'CREATE INDEX [missing_index_' + CONVERT (VARCHAR, mig.index_group_handle) + '_' + CONVERT (VARCHAR, mid.index_handle)
+ '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'
+ ' ON ' + mid.statement
+ ' (' + ISNULL (mid.equality_columns,'')
+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
+ ISNULL (mid.inequality_columns, '')
+ ')'
+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
Tuesday, August 12, 2014
Why I will be ditching SQL Azure in April 2015
UPDATE: See the end of this post for more information
As some may be aware, Microsoft is switching its SQL Database pricing model over to a new set of tiers in April 2015. Currently the tiers are 'Web' and 'Business'. The new tiers are 'Basic', 'Standard', and 'Premium'.
Recently, I decided to go ahead and switch over to the new pricing tiers, and get ahead of the game. I switched the 1.5GB database behind my website from 'Web' to 'Basic' over the weekend when there was a pretty light load. My thought was that if that was enough for my website, I would be fine, and if it had problems, I could quickly switch it over to Standard S1. Both tiers seem to be reasonably priced at $5/month and $20/month respectively for my size of database. I pay $20+/month for the Web tier right now anyway. (That's not what the pricing calculators tell you, but they don't take daily backups into account. Don't get me started.)
A Little Background
My website is fairly small: it has about 300 logins per day. Peak usage is probably around 20 people simultaneously logged in. A small website by any measure. A few months ago, I ran IIS, SQL Server Express, and MS Reporting Services all on a single virtual machine with a single core allocated to it and 1.7GB of RAM. Even on that small a machine, the website screamed!
Back to the Present Day
I switched over to the Basic pricing tier of SQL Azure, and the website seemed to run OK over the weekend. But Monday morning hit, and after a handful of users got on, the website was brought to its knees! Users couldn't log in, and others were reporting multiple errors. So I immediately switched over to the Standard S1 tier. The website came back online, but reports from multiple users during the day made it clear that performance had dropped significantly! The website limped along for the rest of the day, but this was no solution.
That left me with a couple of options: switch to the Standard S2 tier and hope that the $100/month I was going to pay was enough to run my website, or go back to the 'Web' version and hope that Microsoft gets their act together before April 2015.
The Bottom Line
SQL Azure Standard S1 is way underpowered; it gives you much less performance than a single-core machine will. Standard S2 is way overpriced, and may still not be enough horsepower to run a small website. I can't pass on another $100/month in hosting costs to my client with the explanation that "Microsoft is forcing us to pay it to continue the same level of service." And I'm not going to eat that extra cost. So in April, when the 'Web' version expires, I will be moving my database back onto a VM with SQL Server Express. It sucks that I am going to have to pick up all this extra maintenance, but it looks like I won't have many other options.
UPDATE - 9/15/2014:
I have decided I can't wait until the April deadline, and I have moved my data out of SQL Azure. I contacted some people on the SQL Azure team (via Scott Gu), and to their credit, they are very responsive. But in the end, I can't say I'm satisfied with their answers. The first thing they pointed out to me was that the Web database tier doesn't have a guaranteed level of service like their new offerings do. That means that if you happen to be sharing a CPU with some other apps, and they suddenly hit their database hard, you are going to pay the price. Moving to guaranteed performance levels is definitely what you want for a mission-critical application.
But my original gripe remains: SQL Azure is WAY underpowered for what they are charging, even at the lower price levels they announced in September 2014. I had a chance to do some comparisons before I unplugged SQL Azure completely. I have a fairly intensive stored proc that I run at night. This is what I found when running that proc:
SQL Azure Standard S2: Stored proc took 2 minutes 49 seconds to execute
SQL Server Running on A1 VM (1.75GB RAM, 1 core CPU): 32 seconds to execute
That's right, the S2 tier is over 5 times slower than a VM with a single core. I realize that my results are not scientific, but I tried to level the playing field between the two as much as I could. Doing the 'DTU math' between the tiers, it looks like you will need the P2 level of performance just to get the equivalent of a single-core processor devoted to your database. That's almost $1000/month just in SQL Server charges. Does that sound unreasonable to anyone else? The downside of moving back to a VM is that if your database grows past 10GB, you will have to pony up for a SQL Server license (i.e., move off the Express Edition), but even then it would only take a month or two to break even.
The New Bottom Line
If you have a small application, assume you will need the P2 or P3 level of performance if you are going to use SQL Azure (everything below that will throttle your performance under even the smallest loads). If you have a medium-sized application (100+ simultaneous users), forget about SQL Azure! You won't ever get enough horsepower out of it.
Friday, July 18, 2014
Unable to determine principal end of relationship error in EF
Occasionally, you may get the following error:
Unable to determine the principal end of the 'Model.FK_Parent_Child' relationship. Multiple added entities may have the same primary key.
This is generally caused by attempting to use a primary key property before it has been initialized. For example, let's say that you have created the parent and child records, but have not saved either to the database yet. If the primary key on the parent is an identity field (let's call it 'ParentId'), this means that the value has not been initialized yet.
So if you attempt to link the child to this parent by using this id, you will get the error above:
Child.ParentId = Parent.ParentId
Instead, you want to use the navigation properties to link these records:
Child.Parent = Parent
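To make the difference concrete, here is a minimal sketch of my own (not from the original post) using hypothetical Parent/Child entities and a hypothetical MyDbContext, built against EF 6. The parent's ParentId is an identity column, so it is still 0 on a new, unsaved parent:
using System.Collections.Generic;
using System.Data.Entity;

public class Parent
{
    public int ParentId { get; set; }                 // identity primary key
    public string Name { get; set; }
    public virtual ICollection<Child> Children { get; set; }
}

public class Child
{
    public int ChildId { get; set; }
    public int ParentId { get; set; }                 // foreign key to Parent
    public string Name { get; set; }
    public virtual Parent Parent { get; set; }
}

public class MyDbContext : DbContext                  // hypothetical context
{
    public DbSet<Parent> Parents { get; set; }
    public DbSet<Child> Children { get; set; }
}

class Program
{
    static void Main()
    {
        using (var db = new MyDbContext())
        {
            var parent = new Parent { Name = "New parent" };
            db.Parents.Add(parent);

            var child = new Child { Name = "New child" };

            // Wrong: parent.ParentId is still 0 here, so EF can't tell which added
            // entity is the principal end and throws the error above on SaveChanges.
            // child.ParentId = parent.ParentId;

            // Right: set the navigation property and let EF fill in the key during the insert.
            child.Parent = parent;
            db.Children.Add(child);

            db.SaveChanges();
        }
    }
}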
Friday, June 6, 2014
Determining the amount of space used by all tables in SQL Server
SELECT
t.NAME AS TableName,
s.Name AS SchemaName,
p.rows AS RowCounts,
SUM(a.total_pages) / 128 AS TotalSpaceMB,   -- pages are 8KB, so 128 pages per MB
SUM(a.used_pages) / 128 AS UsedSpaceMB,
(SUM(a.total_pages) - SUM(a.used_pages)) / 128 AS UnusedSpaceMB
FROM
sys.tables t
INNER JOIN
sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
sys.schemas s ON t.schema_id = s.schema_id
WHERE
t.NAME NOT LIKE 'dt%'
AND t.is_ms_shipped = 0
AND i.OBJECT_ID > 255
GROUP BY
t.Name, s.Name, p.Rows
ORDER BY
TotalSpaceMB desc
Wednesday, January 29, 2014
Improving bulk insert performance in Entity Framework
Sometimes Entity Framework will have poor performance when inserting records. A possible workaround is to set the following flags:
yourContext.Configuration.AutoDetectChangesEnabled = false;
yourContext.Configuration.ValidateOnSaveEnabled = false;
This should be done before the calls to 'AddObject'. In my case, this reduced the time a piece of code took to process 2,000 records from 8 minutes down to about 10 seconds. This fix requires more research to determine if there are side effects to setting these flags. If this is a shared context, then the flags should be set back to their original values after 'SaveChanges' has been called.
Note that this only works with DbContext, and not ObjectContext.
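As a rough illustration of the pattern above (my own sketch; MyDbContext, MyEntities, and MyEntity are hypothetical), the flags can be captured, turned off before the Add loop, and restored in a finally block so a shared context is left the way it was found:
using (var db = new MyDbContext())              // hypothetical DbContext
{
    var originalDetectChanges = db.Configuration.AutoDetectChangesEnabled;
    var originalValidateOnSave = db.Configuration.ValidateOnSaveEnabled;
    try
    {
        // Turn off the per-Add change tracking and per-entity validation work.
        db.Configuration.AutoDetectChangesEnabled = false;
        db.Configuration.ValidateOnSaveEnabled = false;

        for (var i = 0; i < 2000; i++)
        {
            db.MyEntities.Add(new MyEntity { Name = "Row " + i });   // hypothetical entity set
        }

        db.SaveChanges();
    }
    finally
    {
        // Restore the original values so a shared context behaves normally afterwards.
        db.Configuration.AutoDetectChangesEnabled = originalDetectChanges;
        db.Configuration.ValidateOnSaveEnabled = originalValidateOnSave;
    }
}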
Monday, January 6, 2014
Moving a Repository from SVN to Git on VisualStudio.com
- Create a new folder.
- Launch a Git Bash prompt and navigate to this folder.
- Use git to clone the SVN repository to this folder.
- git svn clone http://svn/repo/here/trunk (Taken from http://stackoverflow.com/a/79178/224531)
- This may take a while if there is a lot of history. But the end result is a stand-alone Git repo with all the history in it.
- Go to the TFS site (visualstudio.com), and create a new project using Git as the source control.
- After this is created, open the project and click on the 'Code' menu.
- It should show that the repository is empty. Follow the steps on this page, using the Bash command prompt, to attach the local repo to this new origin, and push the changes to the TFS site.
Friday, January 3, 2014
Fixing 500 Errors with IISExpress
When an app throws 500 errors in IIS Express with no additional debugging information, try the following:
- Close Visual Studio (the solution that is set up to run under IIS Express).
- Go to Documents/IISExpress/config/ in your user profile folder.
- Rename or delete applicationhost.config
- Open your solution in Visual Studio.
- A dialog may fire up from IIS Express; this will set up a fresh config.
- Try to run your web app.