
Feed aggregator

Read Barcodes from Excel file

C-Sharpcorner - Latest Articles - 4 hours 40 min ago
In my last article, I presented a solution for generating many barcodes and storing the barcode images in an Excel file.
Categories: Communities

Staying Viable In The Software Design Business: A Case For Deterrence

C-Sharpcorner - Latest Articles - 6 hours 12 min ago
In this article you will learn about staying viable in the software design business, with a case for deterrence.
Categories: Communities

Building Your First Windows Store App

C-Sharpcorner - Latest Articles - 7 hours 33 min ago
In this article, you will learn how to write your first “Hello World!” application for Windows Store. You will need Visual Studio 2013 or Visual Studio 2014 to follow this tutorial. I used Visual Studio 2014.
Categories: Communities

For proven in-memory technology without costly add-ons, migrate your Oracle databases to SQL Server 2014

Today, we are making available a new version of SQL Server Migration Assistant (SSMA), a free tool to help customers migrate their existing Oracle databases to SQL Server 2014. Microsoft released SQL Server 2014 earlier this year, after months of customer testing, with features such as In-Memory OLTP to speed up transaction performance, In-Memory Columnstore to speed up query performance, and other great hybrid cloud features such as backup to cloud directly from SQL Server Management Studio and the ability to utilize Azure as a disaster recovery site using SQL Server 2014 AlwaysOn.

Available now, SQL Server Migration Assistant version 6.0 for Oracle databases greatly simplifies the database migration process from Oracle to SQL Server. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing, reducing the cost and risk of database migration projects. Moreover, SSMA version 6.0 for Oracle brings additional features such as automatically moving Oracle tables into SQL Server 2014 in-memory tables, the ability to process 10,000 Oracle objects in a single migration, and increased performance in database migration and report generation.

Many customers have realized the benefits of migrating their databases to SQL Server using previous versions of SSMA.

SSMA for Oracle is designed to support migration from Oracle 9i or later to all editions of SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014. The SSMA product team is also available to answer your questions and provide technical support.

To download SSMA for Oracle, go here. To evaluate SQL Server 2014, go here.  

Categories: Companies

Task List Feature in Visual Studio

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 18:25
This is all about the Task List window that is helpful in managing comments (to-do lists).
Categories: Communities

Sample Exceptions in C#

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 18:11
In this article you will learn about Exceptions in C# with examples.
Categories: Communities

Get started backing up to the cloud with SQL Server Backup to Microsoft Azure Tool

If you’re considering backing up your SQL Server database to the cloud, there are many compelling reasons. Not only will you have an offsite copy of your data for business continuity and disaster recovery purposes, but you can save on CAPEX by using Microsoft Azure for cost-effective storage. And now, you can choose to back up to Microsoft Azure even for databases that aren’t running the latest version of SQL Server, creating a consistent backup strategy across your database environment.

SQL Server has these tools and features to help you back up to the cloud:

  • In SQL Server 2014, Managed Backup to Microsoft Azure manages your backups to Microsoft Azure, setting backup frequency based on data activity. It is available inside SQL Server Management Studio in SQL Server 2014.
  • In SQL Server 2012 and 2014, Backup to URL provides backup to Microsoft Azure using T-SQL and PowerShell scripting.
  • For prior versions, the SQL Server Backup to Microsoft Azure Tool enables you to back up all supported versions of SQL Server, including older ones, to the cloud. It can also provide encryption and compression for your backups, even for versions of SQL Server that don’t support these functions natively.

To show you how easy it is to get started with SQL Server Backup to Microsoft Azure Tool, we’ve outlined the four simple steps you need to follow:

Prerequisites: Microsoft Azure subscription and a Microsoft Azure Storage Account.  You can log in to the Microsoft Azure Management Portal using your Microsoft account.  In addition, you will need to create a Microsoft Azure Blob Storage Container:  SQL Server uses the Microsoft Azure Blob storage service and stores the backups as blobs. 

Step 1: Download the SQL Server Backup to Microsoft Azure Tool, which is available on the Microsoft Download Center.

Step 2: Install the tool. From the download page, download the MSI (x86/x64) to a local machine that has SQL Server instances installed, or to a local share with access to the Internet. Use the MSI to install the tool on your production machines. Double-click to start the installation.

Step 3: Create your rules. Start the Microsoft SQL Server Backup to Microsoft Azure Tool Service by running SQLBackup2Azure.exe. Going through the wizard to set up rules tells the program which backup files should be encrypted, compressed, or uploaded to Azure storage. The Tool does not do job scheduling or error tracking, so you should continue to use SQL Server Management Studio for that functionality.

On the Rules page, click Add to create a new rule. This will launch a three-screen rule entry wizard.

The rule will tell the Tool what local folder to watch for backup file creation. You must also specify the file name pattern that this rule should apply to.

To store the backup in Microsoft Azure Storage, you must specify the name of the account, the storage access key, and the name of the container.  You can retrieve the name of the storage account and the access key information by logging into the Microsoft Azure management portal.

At this time, you can also specify whether or not you wish to have the backup files encrypted or compressed.

Once you have created one or more rules, you will see the existing rules and the option to Modify or Delete the rule.

Step 4: Restore a Database from a Backup Taken with SQL Server Backup to Microsoft Azure Tool in place. The SQL Server Backup to Microsoft Azure Tool creates a ‘stub’ file with some metadata to use during restore.  Use this file like your regular backup file when you wish to restore a database.  SQL Server uses the metadata from this file and the backup on Microsoft Azure storage to complete the restore. 

If the stub file is ever deleted, you can recover a copy of it from the Microsoft Azure storage container in which the backups are stored.  Place the stub file into a folder on the local machine where the Tool is configured to detect and upload backup files.
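As a rough illustration of Step 4, restoring from the stub is the same RESTORE you would issue for a local backup file. Here is a hedged C# sketch; the server, database name, and path are placeholders for this example, not values from this post:

```csharp
using System.Data.SqlClient;

class RestoreFromStub
{
    static void Main()
    {
        // The stub file on disk stands in for the full backup; per the
        // description above, SQL Server uses the stub's metadata plus the
        // backup in Microsoft Azure storage to complete the restore.
        using (var conn = new SqlConnection(@"Server=.;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                @"RESTORE DATABASE [MyDb]
                  FROM DISK = N'C:\Backups\MyDb.bak'
                  WITH RECOVERY", conn);
            cmd.ExecuteNonQuery();
        }
    }
}
```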

That’s all it takes!  Now you’re up and running with Backup to and Restore from Microsoft Azure.

To learn more about why to back up to the cloud, join Forrester Research analyst Noel Yuhanna in a webinar on Database Cloud Backup and Disaster Recovery.  You’ll find out why enterprises should make database cloud backup and DR part of their enterprise database strategy. 

The webinar takes place on Tuesday, 7/29 at 9 AM Pacific time; register now.

Categories: Companies

Regional Setting in a SharePoint Web Application 2013 Using REST API

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 16:46
This article explains how to get the regional settings for a specific web application.
Categories: Communities

Lifecycle of Windows Phone Applications

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 16:15
This article provides an overview of the normal Windows Phone Application Lifecycle.
Categories: Communities

How to Get the List of Features Activated in SharePoint Web Application 2013 Using REST API

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 15:11
This article explains how to get the list of Features enabled for a web application.
Categories: Communities

Code Coverage Metrics That Matter

NCover - Code Coverage for .NET Developers - Thu, 07/24/2014 - 12:07

The first rule of code coverage is that not all code coverage metrics are created equal.  In this webinar we discuss three key code coverage metrics that matter: branch coverage, sequence point coverage and the change risk anti-patterns score.  In addition, we cover how all three can work together to provide you a more comprehensive understanding of your code.

This webinar covers how each of the metrics is calculated so that you can use each of them on a more informed basis.  In addition, we discuss how they are useful in managing both code coverage and risk and providing you with measurable feedback on the overall riskiness of your code base.

Welcome to the NCover webinar on Code Coverage Metrics That Matter. Today we are going to discuss several key code coverage metrics that you can use in the development of your .NET applications to improve overall code quality and improve the reliability and viability of your .NET applications. After we’ve discussed the metrics, we are going to show you how you can immediately find and start using them within the NCover user interface. One thing that is important to keep in mind when you think about code coverage is the fact that not all code coverage metrics are created equal. At NCover, we believe it is important not only to find the metrics that are most useful in maintaining your code but also to understand how those metrics are calculated.

Okay, let’s start with how we measure success as it relates to code coverage within the NCover interface. For us, the most important metric in measuring the success of your testing is branch coverage. Branch coverage represents the percentage of individual code segments, or branches, that were covered during the testing of an application. When we refer to a “branch” we are referring to a segment of code that has exactly one entry point and one exit point. For example, a very simple if / else statement has two distinct branches: the first taken if the condition is met, and the second if it is not. We feel that branch coverage is a good measurement of the success of your testing strategy because it tells you, of the potential branches or paths your software may take, how many have been exercised.

So if branch coverage is how we measure success, we then want to look at how we achieve success, how we increase our total overall branch coverage. The metric we use for this is sequence point coverage. Sequence point coverage is the percentage of sequence points that were covered during the testing of the application.
As we will show you in a little bit, one of the most common ways to visualize sequence point coverage is to drill down through the NCover GUI all the way to the source code, where we can see, through source code highlighting, which sequence points have been covered in a particular application.

If you have had any experience with code coverage metrics before, those are probably two metrics you are relatively used to seeing and understand pretty well. However, that’s only half of the equation. At NCover, we look not only at how well you have tested your code but also at the risk of maintaining that code over time. The metric we use to assess that risk is the change risk anti-patterns score, which scores the amount of uncovered code against the complexity of that code.

Since the change risk anti-patterns score reflects risk, in general you want to keep this score as low as possible. Achieving this requires a balance between two variables: on the one hand you are trying to increase your total code coverage, and on the other you are trying to decrease the total complexity of your code base. It’s a fairly well accepted fact that the more complex your code base is, the larger the probability that you will have unintended consequences when you make changes to it, which means higher support costs, higher development costs and a higher total cost of software over time.
Identifying the right metrics and using them effectively within your organization can have several important benefits: aligning teams across shared, common goals; creating a sense of transparency, so that as you manage the balance between increased testing and reduced risk you know exactly where to focus your efforts; and, finally, improving your overall code quality, which, as we mentioned before, is really about using finite resources to deliver the best applications possible. Unfortunately, using oversimplified metrics, or using them improperly with a failure to really understand where they are coming from, can cause several negative consequences, including hiding critical issues within your code base, perhaps associated with a lack of testing or increased complexity, which can lead to a sense of false confidence and, ultimately, waste time, energy and valuable resources.

Alright, let’s take a look within NCover and see where you can find these metrics and how you can start using them in your organization. Whether you are using NCover Code Central within the build environment or to aggregate coverage across a team, or you are using NCover Desktop or Bolt within the development environment as part of your development process, the approach is the same, but we will briefly walk you through both scenarios. Here, we are looking at the dashboard, which is an aggregation of the coverage metrics, with trend charts, across all of our open projects. As you can see, we prominently display branch coverage, sequence point coverage and a variety of complexity metrics including the change risk anti-patterns score. When we drill into a particular project, we can see each of the code coverage metrics across all of our executions over time or across multiple machines.
All of the metrics are represented with either green or red bars, based on user-defined thresholds that you can set across all of the metrics. For branch coverage, you can quickly see your total branch coverage as well as the total number of branch points and those that have been covered. You can also see the same for sequence point coverage: your total percent, as well as the total number of covered and total available sequence points. In order to better manage the risk of your code, we provide you with a maximum change risk anti-patterns score (across all of the methods within that particular set of code), as well as the number of methods within that set of code whose change risk anti-patterns score is in excess of the user-defined acceptable level.

Although we provide you with a robust set of code coverage metrics, we also make it very easy for you and your team to select the metrics you want to focus on. By selecting “settings,” you can quickly identify which metrics you want displayed and which you want hidden. It’s worth noting that even though you may choose to hide a particular metric, the underlying data is still available should you decide you want to look at it later.

As you continue to drill down in the NCover interface to the method level, these metrics become even more useful: you can look at trends and complexity across all of your methods and decide where to focus additional development or testing efforts. By drilling down to the source code level, you can quickly identify those areas of code that have and have not been tested. By dragging your mouse over either the actual source code or the icons representing the individual sequence points, you can quickly identify how your code flows and those branches that still require testing.
For developers working within Visual Studio, we extend the power of NCover’s solution directly into the Visual Studio interface through Bolt, our integrated test runner and code coverage solution. Within this interface, you’ll find the same metrics that you find within the NCover Code Central and Desktop user interfaces, again allowing you to quickly identify those segments of code that either represent high risk or require additional testing. By drilling down to the source code level, you can again look, through source code highlighting, at exactly which sequence points and branch points have been tested. Just a quick note: if you are using NCover Bolt in conjunction with NCover Desktop, all of your code coverage data can seamlessly integrate with your project, allowing multiple team members to aggregate coverage across a total code set.

Regardless of the type of .NET application you are developing, or the size of your team, at NCover, we make code coverage simple. We offer free, 21-day trials of all of our code coverage solutions. All you need to do to get started is visit us at
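The if / else illustration from the talk can be made concrete with a short C# sketch (our own minimal example, not one from the webinar):

```csharp
static class Shipping
{
    // This method has exactly two branches: the "if" path and the "else" path.
    public static decimal Cost(decimal orderTotal)
    {
        if (orderTotal >= 50m)
            return 0m;      // branch 1: free shipping
        else
            return 4.99m;   // branch 2: flat rate
    }
}
```

A test suite that only ever calls Cost with totals above 50 exercises just one of the two branches, so branch coverage for Cost is 50%; one additional test with a small total covers the second branch and brings it to 100%.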


The post Code Coverage Metrics That Matter appeared first on NCover.

Categories: Companies

Kinect for Windows V2 SDK: Jumping In…

Mike Taulty's Blog - Thu, 07/24/2014 - 11:30
I’m not a big video gamer. Of course, I’ve played one or two (hundred? thousand?) video games all the way back to the 1970s and I had the original PlayStation and then the Xbox and the Xbox 360 but, for me, video gaming is something I might do on a rainy day...(read more)
Categories: Blogs

Unusual Ways of Boosting Up App Performance. Lambdas and LINQs

JetBrains .NET Tools Blog - Thu, 07/24/2014 - 11:10

This is the third post in the series. The previous ones can be found here:

Today, we’re going to uncover the common pitfalls of using lambda expressions and LINQ queries, and explain how you can evade them on a daily basis.

Lambda Expressions

Lambda expressions are a very powerful .NET feature that can significantly simplify your code in particular cases. Unfortunately, convenience has its price. Wrong usage of lambdas can significantly impact app performance. Let’s look at what exactly can go wrong.

The trick is in how lambdas work. To implement a lambda (which is a sort of local anonymous function), the compiler has to create a delegate. Naively, each time the lambda is used, a new delegate instance is created as well. This means that if the lambda sits on a hot path (is called frequently), it will generate huge memory traffic.

Is there anything we can do? Fortunately, .NET developers have already thought about this and implemented a caching mechanism for delegates. For better understanding, consider the example below:

Caching lambdas 1

Now look at this code decompiled in dotPeek:

Caching lambdas example. Decompiled code

As you can see, a delegate is made static and created only once – LambdaTest.CS<>9__CachedAnonymousMethodDelegate1.

So, what pitfalls should we watch out for? At first glance, this behavior won’t generate any traffic. That’s true, but only as long as your lambda does not contain a closure. If you pass any context (this, an instance member, or a local variable) to a lambda, caching won’t work. It makes sense: the context may change at any time, and that’s what closures are made for, passing context.
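The original listings above are shown as images; as a stand-in, here is a minimal C# sketch of the two cases (the names are ours, not from the article):

```csharp
using System.Linq;

static class LambdaCachingDemo
{
    static readonly int[] Data = { 1, 2, 3, 4, 5 };

    // No closure: the lambda touches only its own parameter, so the
    // compiler caches the Func<int, bool> in a static field and reuses it.
    public static int CountEven()
    {
        return Data.Count(n => n % 2 == 0);
    }

    // Closure: 'threshold' is captured, so a closure object and a new
    // delegate are allocated on every call.
    public static int CountAbove(int threshold)
    {
        return Data.Count(n => n > threshold);
    }
}
```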

Let’s look at a more elaborate example. For example, your app uses some Substring method to get substrings from strings:

Lambdas example 1

Let’s suppose this code is called frequently and strings on input are often the same. To optimize the algorithm, you can create a cache that stores results:

Lambdas example 2

At the next step, you can optimize your algorithm so that it checks whether the substring is already in the cache:

Lambdas example 3

The Substring method now looks as follows:

Lambdas example 4

As you pass the local variable x to the lambda, the compiler is unable to cache a created delegate. Let’s look at the decompiled code:

Lambdas example. Decompiled code with no caching

There it is. A new instance of the c__DisplayClass1() is created each time the Substring method is called. The parameter x we pass to the lambda is implemented as a public field of c__DisplayClass1.
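Since the listings are images, here is a comparable sketch of the pattern, using ConcurrentDictionary.GetOrAdd in place of the article’s GetOrCreate helper (names and the caching key are ours, for illustration only):

```csharp
using System.Collections.Concurrent;

static class SubstringCache
{
    static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    // 'start' is captured by the lambda, so the compiler emits a
    // c__DisplayClass-style closure object (holding 'start' as a field)
    // plus a fresh delegate on every call; nothing can be cached.
    public static string Substring(string s, int start)
    {
        return Cache.GetOrAdd(s, key => key.Substring(start));
    }
}
```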

How to Find

As with any other example in this series, first of all, make sure that a certain lambda is actually causing performance issues, i.e. generating huge traffic. This can be easily checked in dotMemory.

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find delegates that generate significant traffic. Objects of …+c__DisplayClassN are also a hint.
  3. Identify the methods responsible for this traffic.

For instance, if the Substring method from the example above is run 10,000 times, the Memory Traffic view will look as follows:

Lambdas shown in dotMemory

As you can see, the app has allocated and collected 10,000 delegates.

When working with lambdas, the Heap Allocation Viewer also helps a lot as it can proactively detect delegate allocation. In our case, the plugin’s warning will look like this:

Warning about lambdas in the HAV plug-in

But once again, data gathered by dotMemory is more reliable, because it shows you whether this lambda is a real issue (i.e. whether or not it generates lots of traffic).

How to Fix

Considering how tricky lambda expressions may be, some companies even prohibit using lambdas in their development processes. We believe that lambdas are a very powerful instrument which definitely can and should be used as long as particular caution is exercised.

The main strategy when using lambdas is avoiding closures. In such a case, a created delegate will always be cached with no impact on traffic.

Thus, for our example, one solution is to not pass the parameter x to the lambda. The fix would look as follows:

Caching lambdas code fix

The updated lambda doesn’t capture any variables; therefore, its delegate should be cached. This can be confirmed by dotMemory:

Lambdas caching after the fix shown in dotMemory

As you can see, now only one instance of Func is created.

If you need to pass some additional context to GetOrCreate, a similar approach (avoiding variable closure) should be used. For example:

Code example of passing additional context to lambdas
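In the same sketch form as before, one way to pass context without a closure is to move it into the cache key, so the lambda uses only its own parameter (again, ConcurrentDictionary.GetOrAdd stands in for the article’s GetOrCreate, and the names are ours):

```csharp
using System;
using System.Collections.Concurrent;

static class SubstringCacheFixed
{
    // The context (the start index) travels inside the key, so the lambda
    // captures nothing; its delegate is created once and cached.
    static readonly ConcurrentDictionary<Tuple<string, int>, string> Cache =
        new ConcurrentDictionary<Tuple<string, int>, string>();

    public static string Substring(string s, int start)
    {
        return Cache.GetOrAdd(Tuple.Create(s, start),
                              key => key.Item1.Substring(key.Item2));
    }
}
```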

LINQ Queries

As we just saw in the previous section, lambda expressions always assume that a delegate is created. What about LINQ? The concepts of LINQ queries and lambda expressions are closely connected and have very similar implementation ‘under the hood.’ This means that all concerns we discussed for lambdas are also true for LINQs.

If your LINQ query contains a closure, the compiler won’t cache the corresponding delegate. For example:

LINQ caching example

As the threshold parameter is captured by the query, its delegate will be created each time the method is called. As with lambdas, traffic from delegates can be checked in dotMemory:

LINQ caching shown in dotMemory
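A minimal sketch of such a query (our own example; the article’s listing is an image):

```csharp
using System.Collections.Generic;
using System.Linq;

static class LinqClosureDemo
{
    // 'threshold' is captured by the query, so the delegate passed to
    // Where is rebuilt on every call instead of being cached.
    public static List<string> NamesLongerThan(List<string> names, int threshold)
    {
        return names.Where(n => n.Length > threshold).ToList();
    }
}
```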

Unfortunately, there’s one more pitfall to avoid when using LINQs. Any LINQ query (as any other query) assumes iteration over some data collection, which, in turn, assumes creating an iterator. The subsequent chain of reasoning should already be familiar: if this LINQ query stays on a hot path, then constant allocation of iterators will generate significant traffic.

Consider this example:

LINQ iterator allocation example

Each time GetLongNames is called, the LINQ query will create an iterator.
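GetLongNames might look something like this (a sketch; the original listing is an image). Each call allocates a WhereListIterator before a single element is produced:

```csharp
using System.Collections.Generic;
using System.Linq;

static class LinqIteratorDemo
{
    // Where() returns a lazily evaluated iterator object (for a List<T>,
    // a System.Linq.Enumerable+WhereListIterator<string>); one is
    // allocated on every call.
    public static List<string> GetLongNames(List<string> names)
    {
        return names.Where(n => n.Length > 10).ToList();
    }
}
```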

How to Find

With dotMemory, finding excessive iterator allocations is an easy task:

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find objects from the namespace System.Linq that contain the word “iterator”. In our example we use the Where LINQ method, so we look for System.Linq.Enumerable+WhereListIterator<string> objects.
  3. Determine the methods responsible for this traffic.

For instance, if we call the Foo method from our example 10,000 times, the Memory Traffic view will look as follows:

LINQ iterator allocation shown in dotMemory

The Heap Allocation Viewer plugin also warns us about allocations in LINQs, but only if they explicitly call LINQ methods. For example:

LINQ iterator allocation warning by the HAV plug-in

How to Fix

Unfortunately, the only answer here is to not use LINQ queries on hot paths. In most cases, a LINQ query can be replaced with foreach. In our example, a fix could look like this:

LINQ iterator allocation fix example

As no LINQs are used, no iterators will be created.

LINQ iterator allocation fix shown in dotMemory
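A foreach version of the GetLongNames example might look like this (again our own reconstruction, since the fixed listing is an image):

```csharp
using System.Collections.Generic;

static class LinqIteratorFix
{
    // Plain foreach: no LINQ iterator and no delegate are allocated per
    // call; only the result list itself.
    public static List<string> GetLongNames(List<string> names)
    {
        var result = new List<string>();
        foreach (var name in names)
        {
            if (name.Length > 10)
                result.Add(name);
        }
        return result;
    }
}
```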

We hope this series of posts has been helpful. Just in case, the previous two can be found here:

Please follow @dotmemoryjb on Twitter or the dotMemory Google+ page to stay tuned.

Categories: Companies

How to Send an Email in C# After Configuring the Server

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 11:07
This article describes how to send an email in C#. For sending the email, you need to configure the email services on the server.
Categories: Communities

RavenDB Replication Topology Visualizer

Ayende @ Rahien - Thu, 07/24/2014 - 10:00

Even more goodies are coming in RavenDB 3.0. Below you can see how to visualize the replication topology in a RavenDB Cluster. You can also see that the t5 database is down (marked as red).


This is important, since this gives us the ability to check the status of the topology from the point of view of the actual nodes. So a node might be up for one server, but not for the other, and this will show up here.

Besides, it is a cool graphic that you can use in your system documentation, and it is much easier to explain.

Categories: Blogs

How to Remove “Workflow Notification” Text in NINTEX Email Notification

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 09:50
In this article you will see how to remove the “Workflow Notification” text in NINTEX email notification.
Categories: Communities

How I Became a C-Sharpcorner Addict

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 09:36
"Addiction is the continued repetition of a behavior despite adverse consequences or a neurological impairment leading to such behaviors." See Wikipedia for more details.
Categories: Communities

Angular JS + Rest API + Getting List Data in SharePoint 2013

C-Sharpcorner - Latest Articles - Thu, 07/24/2014 - 09:26
This article explains how to get the data from a SharePoint List using Angular JavaScript and the REST API.
Categories: Communities

Networking is important–or what we are really not good at

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 09:12

Many of us software developers work with computers to avoid contact with people. To be fair, we all had our fair share of clients that would not understand why we couldn’t draw red lines with green ink. I understand why we would rather stay away from people who don’t understand what we do.

However… (there’s always a however) as I recently started my own business, I’ve really started to understand the value of building your network and staying in contact with people. While being an MVP has always led me to meet great people all around Montreal, the real value I saw was when a very good contact of mine introduced me to one of my first clients. He knew they needed someone with my skills and introduced me directly, skipping all the queues.

You can’t really ask for more. My first client was a big company. You can’t get in there without being a big company that won a bid, being someone renowned, or having the right contacts.

You can’t be the big company, and you might not ever be renowned, but you can definitely work on your contacts and expand the number of people you know.

So what can you do to expand your contacts and grow your network?

Go to user groups

This is killing two birds with one stone. First, you learn something new. It might be boring if you already know everything, but let me give you a nice trick.

Arrive early and chat with people. If you are new, ask them if they are new too, ask them about their favourite presentation (if any), where they work, whether they like it, etc. Boom. First contact is done. You can stop sweating.

If this person has been there more than once, s/he probably knows other people you can be introduced to.

Always have business cards

I’m a business owner now. I need to have cards. You might think of yourself as a low-importance developer, but if you meet people and impress them with your skills… they will want to know where you hang out.

If your business doesn’t have 50$ to put on you, make your own! VistaPrint makes those “networking cards” where you can just input your name, email, position, social network, whatever, and you can get 500 for less than 50$.

Everyone in the business should have business cards. Especially those that make the company money.

Don’t expect anything

I know… giving out your card sounds like you want to sell something to people or that you want them to call you back.

When I give my card, it’s in the hope that when they come back later that night and see my card they will think “Oh yeah it’s that guy I had a great conversation with!”. I don’t want them to think I’m there to sell them something.

My go-to phrase when I give it to them is “If you have any question or need a second advice, call me or email me! I’m always available for people like you!”

And I am.

Follow-up after giving out your card

When you give your card and receive another in exchange (you should!), send them a personal email. Tell them about something you liked from the conversation and ask if you can add them on LinkedIn (always good). It seems simple to a salesman, but we developers often forget that an email the day after has a very good impact.

People will remember you for writing to them personally with specific details from the conversation.

Yes. That means no “copy/paste” email. Got to make it personal.

If the other person doesn’t have a business card, take the time to note their email and full name (bring a pad!).

Rinse and repeat

If you keep on doing this, you should start to build a very strong network of developers in your city. If you have a good profile, recruiters should also start to notice you. Especially if you added all those people on LinkedIn.

It’s all about incremental growth. You won’t be a superstar tomorrow (and neither am I), but by working at it, you might end up finding your next job through a contact you met only once but who was impressed by who you are.


So here’s the Too Long; Didn’t Read version. Go out. Get business cards. Give them to everyone you meet. Your intention is to help them, not sell them anything. Repeat often.

But in the long run, it’s all about getting out there. If you want a more detailed read of what real networking is about, you should definitely read Work the Pond by Darcy Rezac. It’s a very good read.

Categories: Blogs

Massive Community Update 2014-07-04

Decaying Code - Maxime Rouiller - Thu, 07/24/2014 - 09:12

So here I go again! We have Phil Haack explaining how he handles tasks in his life with GitHub, James Chambers’ series on MVC and Bootstrap, the Visual Studio 2013 Update 3 Release Candidate, a new MVC+WebAPI release and more!

Especially, don’t miss this awesome series by Tomas Jansson about CQRS. He did an awesome job and I think you guys need to read it!

So beyond this, I’m hoping you guys have a great day!

Must Read

GitHub Saved My Marriage - You've Been Haacked

James Chambers’ Series

Day 21: Cleaning Up Filtering, the Layout & the Menu | They Call Me Mister James

Day 22: Sprucing up Identity for Logged In Users | They Call Me Mister James

Day 23: Choosing Your Own Look-And-Feel | They Call Me Mister James

Day 24: Storing User Profile Information | They Call Me Mister James

Day 25: Personalizing Notifications, Bootstrap Tables | They Call Me Mister James

Day 26: Bootstrap Tabs for Managing Accounts | They Call Me Mister James

Day 27: Rendering Data in a Bootstrap Table | They Call Me Mister James


Nodemon vs Grunt-Contrib-Watch: What’s The Difference?


Update 3 Release Candidate for Visual Studio 2013

Test-Driven Development with Entity Framework 6 -- Visual Studio Magazine


Announcing the Release of ASP.NET MVC 5.2, Web API 2.2 and Web Pages 3.2

Using Discovery and Katana Middleware to write an OpenID Connect Web Client

Project Navigation and File Nesting in ASP.NET MVC Projects - Rick Strahl's Web Log

ASP.NET Session State using SQL Server In-Memory

CQRS Series (code on GitHub)

CQRS the simple way with eventstore and elasticsearch: Implementing the first features

CQRS the simple way with eventstore and elasticsearch: Implementing the rest of the features

CQRS the simple way with eventstore and elasticsearch: Time for reflection

CQRS the simple way with eventstore and elasticsearch: Build the API with simple.web

CQRS the simple way with eventstore and elasticsearch: Integrating Elasticsearch

CQRS the simple way with eventstore and elasticsearch: Let us throw neo4j into the mix

Ending discussion to my blog series about CQRS and event sourcing


Michael Feathers - Microservices Until Macro Complexity

Windows Azure

Azure Cloud Services and Elasticsearch / NoSQL cluster (PAAS) | I'm Pedro Alonso


Monitoring

Search Engines (ElasticSearch, Solr, etc.)

Fast Search and Analytics on Hadoop with Elasticsearch | Hortonworks

This Week In Elasticsearch | Blog | Elasticsearch

Solr vs. ElasticSearch: Part 1 – Overview | Sematext Blog

Categories: Blogs