
Feed aggregator

Creating a simple ASP.NET 5 Markdown TagHelper

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

I've been dabbling a bit with the new ASP.NET 5 TagHelpers and I was wondering how easy it would be to create one.

I've created a simple Markdown TagHelper with the CommonMark implementation.

So let me show you what it is, what each line of code is doing and how to implement it in an ASP.NET MVC 6 application.

The Code
using CommonMark;
using Microsoft.AspNet.Mvc.Rendering;
using Microsoft.AspNet.Razor.Runtime.TagHelpers;

namespace My.TagHelpers
{
    [HtmlTargetElement("markdown")]
    public class MarkdownTagHelper : TagHelper
    {
        public ModelExpression Content { get; set; }

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagMode = TagMode.SelfClosing;
            output.TagName = null;

            var markdown = Content.Model.ToString();
            var html = CommonMarkConverter.Convert(markdown);

            // Render the converted HTML as-is instead of encoding it back to text.
            output.Content.SetContent(new HtmlString(html));
        }
    }
}
Inspecting the code

Let's start with the HtmlTargetElementAttribute. This will wire the HTML tag <markdown></markdown> to be interpreted and processed by this class. There is nothing stopping you from actually having more than one target.

You could, for example, target the element <md></md> by just adding [HtmlTargetElement("md")] and it would support both tags without any other changes.
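For instance, a minimal sketch of what the stacked attributes would look like (same class as above, nothing else changes):

[HtmlTargetElement("markdown")]
[HtmlTargetElement("md")]
public class MarkdownTagHelper : TagHelper
{
    // same implementation as shown above
}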

The Content property will allow you to write code like this:

@model MyClass

<markdown content="@ViewData["markdown"]"></markdown>    
<markdown content="Markdown"></markdown>    

This easily allows you to use your model or any server-side code without having to handle data mapping manually.

TagMode.SelfClosing will force the HTML to use a self-closing tag rather than having content inside (which we're not going to use anyway). So now we have this:

<markdown content="Markdown" />

All the remaining lines of code are dedicated to making sure that the content we render is actual HTML. Setting output.TagName to null just makes sure that we do not render the actual markdown tag.

And... that's it. Our code is complete.

Activating it

Now you can't just go and create TagHelpers and have them automatically served without wiring up one thing.

In your ASP.NET 5 projects, go to /Views/_ViewImports.cshtml.

You should see something like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

This will load all TagHelpers from the Microsoft.AspNet.Mvc.TagHelpers assembly.

Just duplicate the line and type in your assembly name.
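Assuming the TagHelper above lives in an assembly named My.TagHelpers (an assumption based on the namespace used earlier; use your project's actual assembly name), the duplicated line would look like this:

@addTagHelper "*, My.TagHelpers"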

Then in your Razor code you can have the code below:

public class MyClass
{
    public string Markdown { get; set; }
}

And the view:

@model MyClass
@{
    ViewData["Title"] = "About";
}

<markdown content="Markdown"/>

Which will output your markdown formatted as HTML.

Now whether you load your markdown from files, a database or anywhere else... you can have your user write rich text in any text box and have your application generate safe HTML.

Categories: Blogs

Should our front-end websites be server-side at all?

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

I’ve been toying around with projects like Jekyll, Hexo and even some hand-rolled software that will generate me HTML files based on data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take examples and build my case.

Example 1: Blog

Of course, simpler examples like blogs could literally all be static. If you need comments, then you could go with a system like Disqus. This is quite literally one of the only parts of your system that is dynamic.

RSS feed? Generated from posts. Posts themselves? Could be automatically generated from a database or Markdown files periodically. The resulting output can be hosted on a Raspberry Pi without any issues.

Example 2: E-Commerce

This one is more of a problem. Here are the things that don’t change a lot. Products. OK, they may change but do you need to have your site updated right this second? Can it wait a minute? Then all the “product pages” could literally be static pages.

Product reviews? They will need to be "approved" anyway before you want them live. Put them in a server-side queue, and regenerate the product page with the updated review once it's done.

There’s 3 things that I see that would require to be dynamic in this scenario.

Search, Checkout and Reviews. Search, because as your product catalog scales up, so does your data; doing the search client-side won't scale at any level. Checkout, because we are now handling an actual order and it needs a server component. Reviews, because we'll need to approve and publish them.

In this scenario, only the Search is an actual "Read" component that is now server-side. Everything else? Pre-generated. Even if the search brings you the list of products dynamically, it can still end up on a static page.

All the other write components? Queued server side to be processed by the business itself with either Azure or an off-site component.

All the backend side of the business (managing products, availability, sales, whatnot, etc.) will need a management UI that will be 100% dynamic (read/write).


So… do we need a dynamic front-end built with the latest server framework? On the public-facing side too, or just the backend?

If you want to discuss it, Tweet me at @MaximRouiller.

Categories: Blogs

You should not be using WebComponents yet

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

Have you read about WebComponents? It sounds like something we have all tried to achieve on the web for... well... a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that Web Components is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates


This specific specification is not part of the "Web components" section. It has been integrated into HTML5. Hence, this one is safe.

Custom Elements


This specification is for review and not for implementation!

Alright, no. Let's not touch this yet.

Shadow DOM


This specification is for review and not for implementation!

Wow. Okay so this is out of the window too.

HTML Imports


This one is still a working draft so it hasn't been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted, edited and all done by the Google Chrome team. Except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome has all the specs already implemented.

Firefox implemented it but put it behind a flag (in about:config, search for the property dom.webcomponents.enabled)

In Internet Explorer, they are all "Under Consideration"

What that tells us

Google is pushing for a standard. Hard. They built the spec and are pushing it very hard too, since all of this is available in Chrome STABLE right now. No other vendor has contributed to the spec itself. Polymer is also a project that is built around WebComponents and it's built by... well, the Chrome team.

That tells me that nobody right now should be implementing this in production. If you want to contribute to the spec, fine. But WebComponents are not to be used.

Otherwise, we're only getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know it's a painful path.

What is wrong right now with WebComponents

First, it's not cross platform. We handled that in the past. That's not something to stop us.

Second, the current specification is being implemented in Chrome as if it were recommended by the W3C (it is not), which may lead to changes in the specification that render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is going to even be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to Internet Explorer 6 era but this time with Chrome.

What should I do?

As far as "Production" is concerned, do not use WebComponents directly. Also, avoid Polymer, as it's only a simple wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, like X-Tag or Brick. That way you can benefit from the feature without learning a specification that may be obsolete very quickly or not implemented at all.

Categories: Blogs

Fix: Error occurred during a cryptographic operation.

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

Have you ever had this error while switching between projects using the Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If there is none defined in your web.config, it will use a common one. What happens if the key used to encrypt isn't the same one used to decrypt?

Boom goes the dynamite.
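If you want to stop depending on that common key, you can pin an explicit machineKey per application so encryption and decryption always use the same key. A minimal sketch of the web.config element (the key values are placeholders; generate your own):

<system.web>
  <machineKey validationKey="[your generated validation key]"
              decryptionKey="[your generated decryption key]"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>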

Categories: Blogs

Renewed MVP ASP.NET/IIS 2015

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

Well there it goes again. It was just confirmed that I am renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group. All of this is done in my free time and it requires a lot of time. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let's do it again this year!

Categories: Blogs

Failed to delete web hosting plan Default: Server farm 'Default' cannot be deleted because it has sites assigned to it

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

So I had this issue where I was moving web apps between hosting plans. Once they were all transferred, I wondered why Azure refused to delete the old plan, giving me this error message.

After a few clicks left and right and a lot of wasted time, I found this blog post that provides a script to help you debug, and the exact explanation as to why it doesn't work.

To make things quick, it's all about "Deployment Slots". Among other things, they have their own serverFarm setting, and it will not change when you change their parent's in PowerShell (I haven't tried via the portal).

Here's a copy of the script from Harikharan Krishnaraju for future reference:

Switch-AzureMode AzureResourceManager
$Resource = Get-AzureResource

foreach ($item in $Resource)
{
    if ($item.ResourceType -Match "Microsoft.Web/sites/slots")
    {
        $plan = (Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ParentResource $item.ParentResource -ApiVersion 2014-04-01).Properties.webHostingPlan;
        write-host "WebHostingPlan " $plan " under site " $item.ParentResource " for deployment slot " $item.Name ;
    }
    elseif ($item.ResourceType -Match "Microsoft.Web/sites")
    {
        $plan = (Get-AzureResource -Name $item.Name -ResourceGroupName $item.ResourceGroupName -ResourceType $item.ResourceType -ApiVersion 2014-04-01).Properties.webHostingPlan;
        write-host "WebHostingPlan " $plan " under site " $item.Name ;
    }
}
Categories: Blogs

Switching Azure Web Apps from one App Service Plan to another

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

So I had to make some changes to App Service Plans for one of my clients. The first thing I tried was to do it from the portal. A few clicks and I'm done!

But before I get into why I need to move one of them, I'll need to tell you about why I needed to move 20 of them.

Consolidating the farm

First, my client had a lot of WebApps deployed left and right in different "Default" ServicePlans. Most were created automatically by scripts or even Visual Studio. Each had a different instance size and different scaling capabilities.

We needed a way to standardize how we scale and, especially, the instance size we deployed on. So we came up with a list of the different hosting plans we needed, the list of apps that would need to be moved, and which hosting plan each was currently on.

That list came to 20 web apps to move. The portal wasn't going to cut it. It was time to bring in the big guns.


PowerShell is the command line for Windows. It's powered by awesomeness and cats riding unicorns. It allows you to do things like remote-control Azure, import/export CSV files and so much more.

CSV and Azure is what I needed. Since we built a list of web apps to migrate in Excel, CSV was the way to go.

The Code, or rather, The Script

What follows is what is being used. It's heavily inspired by what I found online.

My CSV file has 3 columns: App, ServicePlanSource and ServicePlanDestination. Only two are used for the actual command. I could have made this command more generic, but since I was only working with apps in EastUS, well... I didn't need more.
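For illustration, here's what such a file could look like (the app and plan names below are made up):

App,ServicePlanSource,ServicePlanDestination
contoso-web,Default1,StandardPlan-EastUS
contoso-api,Default2,StandardPlan-EastUS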

This script should be considered as "Works on my machine". Haven't tested all the edge cases.


Switch-AzureMode AzureResourceManager
$rgn = 'Default-Web-EastUS'
$filename = '.\AppsToMigrate.csv'   # path to the CSV described above

$allAppsToMigrate = Import-Csv $filename
foreach ($app in $allAppsToMigrate)
{
    if ($app.ServicePlanSource -ne $app.ServicePlanDestination)
    {
        $appName = $app.App
        $source = $app.ServicePlanSource
        $dest = $app.ServicePlanDestination
        $res = Get-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
        $prop = @{ 'serverFarm' = $dest }
        $res = Set-AzureResource -Name $appName -ResourceGroupName $rgn -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $prop
        Write-Host "Moved $appName from $source to $dest"
    }
}
Categories: Blogs

Microsoft Virtual Academy Links for 2014

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

So I thought that going through a few Microsoft Virtual Academy links could help some of you.

Here are the links I think deserve at least a click. If you find them interesting, let me know!

Categories: Blogs

Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a Git project hosted on a local Stash server and the certificates changed. While they were working to fix the issue, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

Do this only because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

set GIT_SSL_NO_VERIFY=true
This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.
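If you'd rather not touch the session at all, git can also take the override for a single invocation through its -c switch and the underlying http.sslVerify setting, which vanishes as soon as the command finishes:

git -c http.sslVerify=false fetch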

Permanent solution

Fix your certificates. Oh… you mean it's self-signed and you will forever use that one? Install it on all machines.

Seriously. I won't show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid is just plain dangerous.

Fix it.


Categories: Blogs

The Yoda Condition

Decaying Code - Maxime Rouiller - Tue, 05/03/2016 - 20:55

So this will be a short post. I would like to introduce a word in my vocabulary and yours too if it didn't already exist.

First, I would like to credit Nathan Smith for teaching me that word this morning. Here's the tweet:

Chuckling at "disallowYodaConditions" in JSCS… — Awesome way of describing it.

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the parameters compared in a conditional".

Let's say I have this code:

string sky = "blue";

if (sky == "blue")
{
    // do something
}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";

if ("blue" == sky)
{
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you're missing an "=" in your code, it will fail at compile time, since you can't assign to a literal string. It can also avoid certain null reference errors.
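A quick illustration of that compile-time failure in C# (the commented-out lines are the typos):

string sky = "blue";

// Normal order with a missing '=': C# already rejects this,
// because a string does not convert to bool.
// if (sky = "blue") { }

// Yoda order with a missing '=': a compile error in pretty much any
// language, because you cannot assign to a literal.
// if ("blue" = sky) { }

Note that C# catches the normal order too, so the Yoda Condition mostly pays off in languages where an assignment inside a condition compiles silently.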

What's the cost of doing this then?

Besides getting on the nerves of all the programmers on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all costs. Readability is the most important thing in your code. To be honest, you're not going to be the only guy/girl maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problem this kind of code solves isn't worth the readability you're losing.

Categories: Blogs

A tour through tool improvements in SQL Server 2016

This post was authored by Ayo Olubeko, Program Manager, Data Developer Group.

Two practices drive successful modern applications today – a fast time to market, and a relentless focus on listening to customers and rapidly iterating on their feedback. This has driven numerous improvements in software development and management practices. In this post, I will chronicle how we’ve embraced these principles to supercharge management and development experiences using SQL Server tooling.

SQL Server 2016 delivers many SQL tools enhancements that converge on the same goal of increasing day-to-day productivity, while developing and managing SQL servers and databases on any platform. This post provides an overview of the improvements and I’ll also drop a few hints about what’s on the way. With SQL Server 2016:

  • It’s easier to access popular tools, such as SQL Server Management Studio (SSMS) and SQL Server Data Tools (SSDT).
  • Monthly releases of new SQL tools make it easy to stay current with new features and fixes.
  • Day-to-day development is being simplified, starting with a new connection experience.
  • New SQL Server 2016 features have a fully guided manageability experience.
  • Automated build and deployment of SQL Server databases can improve your time to market and quality processes.
Finding and using the most popular SQL tools is easier than ever

We received insightful feedback from customers about how difficult it was to find and install tooling for SQL Server, so we’ve taken a few steps to ensure the experience in SQL Server 2016 is as easy as possible.

Free and simple to find and install SQL tools


The SQL Server tools download page is the unified place to find and install all SQL Server-related tools. The latest version of SQL tools doesn’t just support SQL Server 2016, but it also supports all earlier versions of SQL Server, so there is no need to install SQL tools per SQL Server version. In addition, you don’t need a SQL Server license to install and use these SQL tools.

SSMS has a new one-click installer that makes it easy to install, whether you’re on a server in your data center or on your laptop at home. Additionally, the installer supports administrative installs for environments not connected to the Internet.

All your SQL tools for Visual Studio in one installer, for whichever version of SQL Server you use

SQL Server Data Tools (SSDT) is the name for all your SQL tools installed into Visual Studio. With just one installation of SSDT in Visual Studio 2015, developers can easily integrate efforts to develop applications for SQL Server, Analysis Services, Reporting Services, Integration Services and any application in Visual Studio 2015 for SQL Server 2016 – or older versions as needed.

SSDT replaces/unifies older tools such as BIDS, SSDT-BI and the database-only SSDT, eliminating the confusion about which version of Visual Studio to use. From Visual Studio 2015 and up you’ll have a simple way to install all of the SQL tools you use every day.

Easy to stay current – new features and fixes every month

One of the goals for SQL tools is to provide world-class support for your SQL estate wherever it may be. This could be comprised of SQL servers running on-premises or in the cloud, or some fantastic hybrid of both. We support it all. In order to enable world-class coverage of this diverse estate, we have adopted a monthly release cadence for our SQL tools. This faster release cycle brings you additional value and improvements – whether it’s enabling functionality to take advantage of new Microsoft Azure cloud features, issuing a bug fix to address particularly painful errors, or even creating a new wizard/dialog to streamline management of your SQL Server.

These stand-alone SSMS releases include an update checker that informs you of newer SSMS releases when they become available. SSDT update notification continues to be fully integrated with Visual Studio’s notification system. You can keep up to date and learn more about the SSMS and SSDT releases at the SQL Server Release Services blog.

Day-to-day development is being simplified, starting with a new connection experience

Discover and seamlessly connect to your databases anywhere

No more need to memorize server and database names. With just a few clicks, the new connection experience in SQL Server Data Tools helps you automatically discover and connect to all your database assets using favorites, recent history or by simply browsing SQL servers and databases on your local PC, network and Azure. You can also pin databases you frequently connect to so they’re always there when you need them. In addition, the new connection experience intelligently detects the type of connection you need, automatically configures default properties with sensible values and guides you through firewall settings for SQL Database and Data Warehouse.

Streamline connections to your Azure SQL databases in SSMS

The new firewall rule dialog in SSMS allows you to create an Azure database firewall rule within the context of connecting to your database. You don’t have to log in to the Azure portal and create a firewall rule prior to connecting to your Azure SQL Database with SSMS. The firewall rule dialog auto-fills the IP address of your client machine and allows you to optionally whitelist an IP range to allow other connections to the database.


Fully guided management experiences

SQL Server 2016 is packed with advanced, new features including Always Encrypted, Stretch Database, enhancements with In-Memory Table Optimization and new Basic Availability Groups for AlwaysOn — just to name a few. SSMS delivers highly intelligent, easy-to-click-through wizard interfaces that help you enable these new features and make your SQL Server and Database highly secure, highly available and faster in just a few minutes. There’s an easy learning curve, even though the technology that’s under the hood enabling your business is powerful and complex.


Adopting DevOps processes with automated build and deployment of SQL Server databases

Features such as the Data-tier Application Framework (DACFx) technology and SSDT have helped make SQL Server the market leader of model-based database lifecycle management technology. DACFx and SSDT offer a comprehensive development experience by supporting all database objects in SQL Server 2016, so developers can develop a database in a declarative way using a database project.

Using Visual Studio 2015, version control and Team Foundation Server 2015 or Visual Studio Team Services in the cloud, developers can automate database lifecycle management and truly adopt a DevOps model for rapid application and database development and deployment.
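As a hedged sketch of what that automation step can look like (this command is not from the post; the server and database names are placeholders), a build agent can publish the .dacpac produced by a database project build with the SqlPackage command-line tool:

SqlPackage.exe /Action:Publish /SourceFile:"bin\Release\MyDatabase.dacpac" /TargetServerName:"myserver.database.windows.net" /TargetDatabaseName:"MyDatabase"

Authentication parameters are omitted here; add whatever your environment requires.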

What’s coming next in your SQL tools

In the months to come, you can look forward to continued enhancements in both SSMS and SSDT that focus on increasing the ease with which you develop and manage data in any SQL platform.

To this end, SSMS will feature performance enhancements and streamlined management and configuration experiences that build on the new capabilities provided by the Visual Studio 2015 shell. Similarly, SSDT will deliver performance improvements and feature support to help database developers handle schema changes more efficiently.

Improvements like these can’t happen in a vacuum. Your voice and input are absolutely essential to building the next generation of SQL tools. And the monthly release cycle for our SQL tools allows us to respond faster to the issues you bring to our attention. Please don’t forget to vote on Connect bugs or open suggestions for features you would like to see built.

See the other posts in the SQL Server 2016 blogging series.

Try SQL Server 2016 RC

Categories: Companies

Ignoring files in git, but only locally

SerialSeb - Sebastien Lambla - Tue, 05/03/2016 - 11:50

I love git for one main reason: there are enough advanced commands to make nearly any problem find a solution. Ignoring some files, but only in your repository, is one of those.

For example, when doing terraform, I sometimes set development-time variables in the terraform.tfvars file, which exists for that exact purpose, say for my AWS credentials, and I really don’t want that file to be committed. All the same, I probably don’t want to be adding it to the ignore list for everyone, which is what the .gitignore file is.

Instead, you can add your ignore to .git/info/exclude. I tend to do that from the command line.

$ echo 'myfolder/terraform.tfvars' >> .git/info/exclude
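You can verify the exclusion took effect with git check-ignore; the -v flag reports which file and pattern matched (the output below assumes the entry landed on the first line of the exclude file):

$ git check-ignore -v myfolder/terraform.tfvars
.git/info/exclude:1:myfolder/terraform.tfvars	myfolder/terraform.tfvars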
Categories: Blogs

The design of RavenDB 4.0: Making Lucene reliable

Ayende @ Rahien - Tue, 05/03/2016 - 10:00

I don’t like Lucene. It is an external dependency that works in somewhat funny ways, and the version we use is a relatively old one that has been mostly ported as-is from Java. This leads to some design decisions that are questionable (for example, using exceptions for control flow in parsing queries), or just awkward (by default, an error in merging segments will kill your entire process). Getting Lucene to run properly in production takes quite a bit of work and effort. So I don’t like Lucene.

We have spiked various alternatives to Lucene multiple times, but it is a hard problem, and most solutions that we look at lead toward pretty much the same approach that Lucene takes. By now, we have been working with Lucene for over eight years, so we have gotten good at managing it, but there is still quite a bit of code in RavenDB that is dedicated to managing Lucene’s state, figuring out how to recover in case of errors, etc.

Just off the top of my head, we have code to recover from aborted indexing, background processes that take regular backups of the indexes so we’ll be able to restore them in the case of an error, etc. At some point we had a lab of machines that were dedicated to testing that our code was able to manage Lucene properly in the presence of hard resets. We got it working, eventually, but it was hard. And we still get issues from users that run into trouble because Lucene can tie itself into knots (for example, a disk full error midway through indexing can corrupt your index and require us to reset it). And that is leaving aside the joy of what I/O re-ordering does to you when you need to ensure reliability.

So the problem isn’t with Lucene itself, the problem is that it isn’t reliable. That led us to the Lucene persistence format. While Lucene persistent mode is technically pluggable, in practice, this isn’t really possible. The file format and the way it works are very closely tied to the idea of files. Actually, the idea of process data as a stream of bytes. At some point, we thought that it would be good to implement a Transactional NTFS Lucene Directory, but that idea isn’t really viable, since that is going away.

It was at this point that we realized that we were barking at the entirely wrong tree. We already have the technology in place to make Lucene reliable: Voron!

Voron is a low-level storage engine that offers ACID transactions. All we need to do is develop a VoronLuceneDirectory, and that should handle the reliability part of the equation. There are a couple of details that need to be handled; in particular, Voron needs to know, upfront, how much data you want to write, and a single value in Voron is limited to 2GB. But that is fairly easily done. We write to a temporary file from Lucene until it tells us to commit, at which point we can write it to Voron directly (potentially breaking it into multiple values if needed).
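As a rough sketch of that chunking idea (this is not RavenDB’s actual VoronLuceneDirectory; the IChunkStore interface below is entirely hypothetical and stands in for a Voron write transaction), the committed temporary file gets split into values that stay under the per-value ceiling:

using System;
using System.IO;

// Hypothetical stand-in for a Voron write transaction; the real API differs.
interface IChunkStore
{
    void Put(string key, byte[] value); // a single value is capped (2GB in Voron)
}

static class LuceneFileChunker
{
    // Stay comfortably under the 2GB per-value limit.
    const int MaxChunkSize = 256 * 1024 * 1024;

    public static void Write(IChunkStore store, string fileName, Stream tempFile)
    {
        tempFile.Position = 0;
        for (var chunk = 0; tempFile.Position < tempFile.Length; chunk++)
        {
            var size = (int)Math.Min(MaxChunkSize, tempFile.Length - tempFile.Position);
            var buffer = new byte[size];
            var read = 0;
            while (read < size) // Stream.Read may return fewer bytes than requested
                read += tempFile.Read(buffer, read, size - read);

            // Each chunk becomes one keyed value: "segments_1/0", "segments_1/1", ...
            store.Put(fileName + "/" + chunk, buffer);
        }
    }
}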

Voila, we have got ourselves a reliable mechanism for storing Lucene’s data. And we can do all of that in a single atomic transaction.

When reading the data, we can skip all of the hard work and file I/O and serve it directly from Voron’s memory map. And having everything inside a single Voron file means that we can skip things like the compound file format Lucene uses, and choose a more optimal approach.

And with a reliable way to handle indexing, quite large swaths of code can just go away. We can now safely assume that indexes are consistent, so we don’t need to have a lot of checks on that, startup verifications, recovery modes, online backups, etc.

Improvement by omission indeed.

Categories: Blogs

Scaling Up And Scaling Out In Azure Web Sites

C-Sharpcorner - Latest Articles - Tue, 05/03/2016 - 08:00
In this article you will learn how to manage Scaling Up and Scaling Out in Azure Web Sites.
Categories: Communities

Connect To An Azure Subscription From The Azure Command Line Interface (Azure CLI) In Windows

C-Sharpcorner - Latest Articles - Tue, 05/03/2016 - 08:00
In this article you will learn how to connect to an Azure subscription from the Azure Command-Line Interface (Azure CLI) in Windows Operating System.
Categories: Communities

Why Choose AngularJS For Mobile Application Development

C-Sharpcorner - Latest Articles - Tue, 05/03/2016 - 08:00
In this article you will learn why to choose AngularJS for Mobile Application Development.
Categories: Communities

Let's Make A Complete Holographic App With Unity

C-Sharpcorner - Latest Articles - Tue, 05/03/2016 - 08:00
In this article you will learn how to make a complete Holographic App with Unity.
Categories: Communities

Auto Suggest With Spell Check And Quick Fix: Visual Studio 2015 Update 2

C-Sharpcorner - Latest Articles - Tue, 05/03/2016 - 08:00
In this article you will learn about the new feature “Auto Suggest with Spell Check” of Visual Studio 2015 update 2.
Categories: Communities

Learn AngularJS From the Beginning - Part Two

C-Sharpcorner - Latest Articles - Tue, 05/03/2016 - 08:00
In this article, we will discuss the basics of the AngularJS filter. This is part two of the article series.
Categories: Communities

Android Wearable: Creating A Simple Survey Feature Using Xamarin

C-Sharpcorner - Latest Articles - Tue, 05/03/2016 - 08:00
In this article you will learn how to create a simple Survey feature using Xamarin for Android Wearable.
Categories: Communities