For proven in-memory technology without costly add-ons, migrate your Oracle databases to SQL Server 2014
Today, we are making available a new version of SQL Server Migration Assistant (SSMA), a free tool to help customers migrate their existing Oracle databases to SQL Server 2014. Microsoft released SQL Server 2014 earlier this year, after months of customer testing, with features such as In-Memory OLTP to speed up transaction performance and In-Memory Columnstore to speed up query performance, as well as hybrid cloud features such as backup to the cloud directly from SQL Server Management Studio and the ability to use Azure as a disaster recovery site with SQL Server 2014 AlwaysOn.
Available now, SQL Server Migration Assistant version 6.0 for Oracle databases greatly simplifies the database migration process from Oracle to SQL Server. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing, to reduce the cost and risk of database migration projects. Moreover, SSMA version 6.0 for Oracle databases brings additional features such as automatically moving Oracle tables into SQL Server 2014 in-memory tables, the ability to process 10,000 Oracle objects in a single migration, and increased performance in database migration and report generation.
Many customers have realized the benefits of migrating their database to SQL Server using previous versions of SSMA. For example:
- Dollar Thrifty Automotive Group migrated their rental car rate engine and saves $135,000 annually.
- Sumitomo Rubber Industries migrated their 21 mission-critical systems from an Oracle database to SQL Server and cut software licensing costs in half.
- G&T Conveyor saves 83 percent on ERP costs by moving from an Oracle database to SQL Server.
SSMA for Oracle is designed to support migration from Oracle 9i or later to all editions of SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014. The SSMA product team is also available to answer your questions and provide technical support at email@example.com
If you’re considering backing up your SQL Server database to the cloud, there are many compelling reasons to do so. Not only will you have an offsite copy of your data for business continuity and disaster recovery purposes, but you can also save on CAPEX by using Microsoft Azure for cost-effective storage. And now you can choose to back up to Microsoft Azure even for databases that aren’t running the latest version of SQL Server – creating a consistent backup strategy across your database environment.
SQL Server has these tools and features to help you back up to the cloud:
- In SQL Server 2014, Managed Backup to Microsoft Azure manages your backup to Microsoft Azure, setting backup frequency based on data activity. It is available inside the SQL Server Management Studio in SQL Server 2014.
- In SQL Server 2012 and 2014, Backup to URL provides backup to Microsoft Azure using T-SQL and PowerShell scripting.
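As a sketch of what Backup to URL looks like in T-SQL (the storage account, container, and key below are placeholders):

```sql
-- Store the Azure storage account credentials (placeholder values).
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',            -- storage account name
     SECRET   = '<storage-account-access-key>';

-- Back up directly to a blob in Azure Storage.
BACKUP DATABASE AdventureWorks2012
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,
     STATS = 5;
```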
- For prior versions, SQL Server Backup to Microsoft Azure Tool enables you to back up to the cloud all supported versions of SQL Server, including older ones. It can also be used to provide encryption and compression for your backups – even for versions of SQL Server that don’t support these functions natively.
To show you how easy it is to get started with SQL Server Backup to Microsoft Azure Tool, we’ve outlined the four simple steps you need to follow:
Prerequisites: Microsoft Azure subscription and a Microsoft Azure Storage Account. You can log in to the Microsoft Azure Management Portal using your Microsoft account. In addition, you will need to create a Microsoft Azure Blob Storage Container: SQL Server uses the Microsoft Azure Blob storage service and stores the backups as blobs.
Step 1: Download the SQL Server Backup to Microsoft Azure Tool, which is available on the Microsoft Download Center.
Step 2: Install the tool. From the download page, download the MSI (x86/x64) to the local machine that has the SQL Server instances installed, or to a local share with access to the Internet. Use the MSI to install the tool on your production machines; double-click it to start the installation.
Step 3: Create your rules. Start the Microsoft SQL Server Backup to Microsoft Azure Tool Service by running SQLBackup2Azure.exe. The wizard walks you through setting up rules that tell the program which backup files should be encrypted, compressed, or uploaded to Azure storage. The Tool does not do job scheduling or error tracking, so you should continue to use SQL Server Management Studio for this functionality.
On the Rules page, click Add to create a new rule. This will launch a three screen rule entry wizard.
The rule will tell the Tool what local folder to watch for backup file creation. You must also specify the file name pattern that this rule should apply to.
To store the backup in Microsoft Azure Storage, you must specify the name of the account, the storage access key, and the name of the container. You can retrieve the name of the storage account and the access key information by logging into the Microsoft Azure management portal.
At this time, you can also specify whether or not you wish to have the backup files encrypted or compressed.
Once you have created one or more rules, you will see the existing rules and the option to Modify or Delete the rule.
Step 4: Restore a Database from a Backup Taken with SQL Server Backup to Microsoft Azure Tool in place. The SQL Server Backup to Microsoft Azure Tool creates a ‘stub’ file with some metadata to use during restore. Use this file like your regular backup file when you wish to restore a database. SQL Server uses the metadata from this file and the backup on Microsoft Azure storage to complete the restore.
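Restoring is then the usual T-SQL, pointed at the stub file; a sketch with placeholder paths:

```sql
-- The stub file stands in for the full backup; the Tool redirects
-- SQL Server to the real backup blob in Azure Storage.
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\Backups\AdventureWorks.bak'   -- path to the stub file
WITH RECOVERY;
```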
If the stub file is ever deleted, you can recover a copy of it from the Microsoft Azure storage container in which the backups are stored. Place the stub file into a folder on the local machine where the Tool is configured to detect and upload backup files.
That’s all it takes! Now you’re up and running with Backup to and Restore from Microsoft Azure.
To learn more about why to back up to the cloud, join Forrester Research analyst Noel Yuhanna in a webinar on Database Cloud Backup and Disaster Recovery. You’ll find out why enterprises should make database cloud backup and DR part of their enterprise database strategy.
The webinar takes place on Tuesday, 7/29 at 9 AM Pacific time; register now.
The first rule of code coverage is that not all code coverage metrics are created equal. In this webinar we discuss three key code coverage metrics that matter: branch coverage, sequence point coverage and the change risk anti-patterns score. In addition, we cover how all three can work together to provide you a more comprehensive understanding of your code.
This webinar covers how each of the metrics is calculated so that you can use each of them on a more informed basis. In addition, we discuss how they are useful in managing both code coverage and risk and providing you with measurable feedback on the overall riskiness of your code base.

Code Coverage Metrics That Matter

Welcome to the NCover webinar on Code Coverage Metrics That Matter. Today we are going to discuss several key code coverage metrics that you can use in the development of your .NET applications to improve overall code quality and improve the reliability and viability of your .NET applications. After we've discussed the metrics, we are going to show you how you can immediately find and start using those metrics within the NCover user interface.

One thing that is important to keep in mind when you think about code coverage is the fact that not all code coverage metrics are created equal. At NCover, we believe it is important not only to find the metrics that are most useful in maintaining your code but also to understand how those metrics are calculated.

Okay, let's start with how we measure success as it relates to code coverage within the NCover interface. For us, the most important metric in measuring the success of your testing is branch coverage. Branch coverage represents the percentage of individual code segments, or branches, that were covered during the testing of an application. When we refer to a "branch" we are referring to a segment of code that has exactly one entry point and one exit point. For example, if you are looking at a very simple if / else statement, that would have two distinct branches: the first being if the condition was met and the second being if it was not. We feel that branch coverage is a good measurement of the success of your testing strategy because it lets you know, of the potential branches or paths that your software may take, how many of them have been exercised.
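The if / else case described above can be sketched in C# (illustrative only):

```csharp
public static string Classify(int value)
{
    // Two distinct branches: a test suite that only ever passes
    // non-negative values leaves this method at 50% branch coverage.
    if (value >= 0)
        return "non-negative";   // branch 1: condition met
    else
        return "negative";       // branch 2: condition not met
}
```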
So if branch coverage is how we measure success, then we want to look at how we achieve success, how we increase our total overall branch coverage. The metric we look to for this is sequence point coverage. Sequence point coverage is the percentage of sequence points that were covered during the testing of the application. As we will show you in a little bit, one of the most common ways to visualize sequence point coverage is to drill down through the NCover GUI all the way to the source code, where we can see, through source code highlighting, which sequence points have been covered in a particular application.

If you have had any experience with code coverage metrics before, those are probably two metrics you are relatively used to seeing and understand pretty well. However, that's only half of the equation. At NCover, we look not only at how well you have tested your code but also at the risk of maintaining that code over time. The metric we use to assess that risk is the change risk anti-patterns score. The change risk anti-patterns score scores the amount of uncovered code against the complexity of that code. Since the change risk anti-patterns score reflects risk, in general you want to keep this score as low as possible. Achieving this requires a balance between two variables: on the one hand you are trying to increase your total code coverage, and on the other you are trying to decrease the total complexity of your code base. It's a fairly well accepted fact that the more complex your code base is, the larger the probability that you will have unintended consequences when you make changes to that code base, which means higher support costs, higher development costs and a higher total cost of software over a period of time.
Identifying the right metrics and using them effectively within your organization can have several important benefits: the ability to align teams across shared, common goals; a sense of transparency, so that as you manage the balance between increased testing and reduced risk you know exactly where to focus your efforts; and, finally, improved overall code quality, which, as we mentioned before, is really about using finite resources to deliver the best applications possible. Unfortunately, using oversimplified metrics, or using them improperly with a failure to really understand where they are coming from, can cause several negative consequences, including hiding critical issues within your code base, perhaps associated with a lack of testing or increased complexity, which can lead to a sense of false confidence and, ultimately, waste time, energy and valuable resources.

Alright, let's take a look within NCover and see where you can find these metrics and how you can start using them in your organization. Whether you are using NCover Code Central within the build environment or to aggregate coverage across a team, or you are using NCover Desktop or Bolt within the development environment as part of your development process, the approach is the same, but we will briefly walk you through both scenarios. Here, we are looking at the dashboard, which is an aggregation of the coverage metrics, with trend charts, across all of our open projects. As you can see, we prominently display branch coverage, sequence point coverage and a variety of the complexity metrics, including the change risk anti-patterns score. When we drill into a particular project, we can see each of the code coverage metrics across all of our executions over time or across multiple machines.
All of the metrics are represented with either green or red bars, based on user-defined thresholds that you can set across all of the metrics. For branch coverage, you can quickly see your total branch coverage as well as the total number of branch points and those that have been covered. You can also see the same for sequence point coverage: your total percent, as well as the total number of covered and total available sequence points. In order to better manage the risk of your code, we provide you with a maximum change risk anti-patterns score (this is across all of the methods within that particular set of code), as well as the number of methods within that set of code that have a change risk anti-patterns score in excess of the user-defined acceptable level.

Although we provide you with a robust set of code coverage metrics, we also make it very easy for you and your team to select the metrics you want to focus on. By selecting "settings," you can quickly identify which metrics you want displayed and which you want hidden. It's worth noting that even though you may choose to hide a particular metric, the underlying data is still available should you decide you want to look at it later. As you continue to drill down in the NCover interface to the method level, these metrics become even more useful: you can look at trends and complexity across all of your methods and decide which areas deserve additional development or testing effort. By drilling down to the source code level, you can quickly identify those areas of code that have and have not been tested. By dragging your mouse either over the actual source code, or over the icons representing the individual sequence points, you can quickly identify how your code flows and which branches still require testing.
For developers working within Visual Studio, we extend the power of NCover's solution directly into the Visual Studio interface through Bolt, our integrated test runner and code coverage solution. Within this interface, you'll find the same metrics that you find within the NCover Code Central and Desktop user interface, again allowing you to quickly identify those segments of code that either represent high risk or require additional testing. By drilling down to the source code level, you can again look, through source code highlighting, at exactly which sequence points and branch points have been tested. Just a quick note: if you are using NCover Bolt in conjunction with NCover Desktop, all of your code coverage data can seamlessly integrate with your project, allowing multiple members to aggregate coverage across a total code set.

Regardless of the type of .NET application you are developing, or the size of your team, at NCover, we make code coverage simple. We offer free, 21-day trials of all of our code coverage solutions. All you need to do to get started is visit us at www.ncover.com.
This is the third post in the series. The previous ones can be found here:
- Unusual Ways of Boosting Up App Performance. Boxing and Collections
- Unusual Ways of Boosting Up App Performance. Strings
Today, we’re going to uncover the common pitfalls of using lambda expressions and LINQ queries, and explain how you can avoid them on a daily basis.

Lambda Expressions
Lambda expressions are a very powerful .NET feature that can significantly simplify your code in particular cases. Unfortunately, convenience has its price. Wrong usage of lambdas can significantly impact app performance. Let's look at what exactly can go wrong.
The trick is in how lambdas work. To implement a lambda (which is a sort of a local function), the compiler has to create a delegate. Obviously, each time a lambda is called, a delegate is created as well. This means that if the lambda stays on a hot path (is called frequently), it will generate huge memory traffic.
Is there anything we can do? Fortunately, .NET developers have already thought about this and implemented a caching mechanism for delegates. For better understanding, consider the example below:
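The original snippet is not reproduced in this copy of the post; a minimal closure-free lambda along these lines illustrates the point (all names here are ours, not the post's):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class LambdaTest
{
    public static List<int> GetPositive(List<int> numbers)
    {
        // The lambda captures no outer state, so the compiler emits it
        // once, stores the delegate in a static field, and reuses it
        // on every subsequent call.
        return numbers.Where(i => i > 0).ToList();
    }
}
```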
Now look at this code decompiled in dotPeek:
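The decompiled listing is missing from this copy; roughly, dotPeek shows something of this shape (compiler-generated names vary by compiler version, so treat this decompiled pseudo-C# as a sketch):

```csharp
// The delegate lives in a static cache field and is created only
// on the first call; later calls reuse it.
[CompilerGenerated]
private static Func<int, bool> CS$<>9__CachedAnonymousMethodDelegate1;

public static List<int> GetPositive(List<int> numbers)
{
    if (CS$<>9__CachedAnonymousMethodDelegate1 == null)
        CS$<>9__CachedAnonymousMethodDelegate1 =
            new Func<int, bool>(LambdaTest.<GetPositive>b__0);
    return numbers.Where(CS$<>9__CachedAnonymousMethodDelegate1).ToList();
}
```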
As you can see, a delegate is made static and created only once: LambdaTest.CS<>9__CachedAnonymousMethodDelegate1.
So, what pitfalls should we watch out for? At first glance, this behavior won't generate any traffic. That's true, but only as long as your lambda does not contain a closure. If you pass any context (this, an instance member, or a local variable) to a lambda, caching won't work. This makes sense: the context may change at any time, and passing context is exactly what closures are made for.
Let's look at a more elaborate example. Suppose your app uses some Substring method to get substrings from strings:
Let’s suppose this code is called frequently and strings on input are often the same. To optimize the algorithm, you can create a cache that stores results:
At the next step, you can optimize your algorithm so that it checks whether the substring is already in the cache:
The Substring method now looks as follows:
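The code for these steps isn't included in this copy of the post; combined, a sketch of the cached version might look like this (all names are hypothetical):

```csharp
using System;
using System.Collections.Generic;

public static class StringHelper
{
    private static readonly Dictionary<string, string> _cache =
        new Dictionary<string, string>();

    // Returns the cached result for key, or runs the factory and stores it.
    private static string GetOrCreate(string key, Func<string> factory)
    {
        string value;
        if (!_cache.TryGetValue(key, out value))
        {
            value = factory();
            _cache[key] = value;
        }
        return value;
    }

    public static string Substring(string x, int startIndex)
    {
        // The lambda closes over the parameter x, so its delegate
        // cannot be cached: a closure object is allocated per call.
        return GetOrCreate(x + "." + startIndex, () => x.Substring(startIndex));
    }
}
```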
As you pass the local variable x to the lambda, the compiler is unable to cache the created delegate. Let's look at the decompiled code:
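The decompiled listing isn't reproduced here; in rough outline (the generated names are illustrative), dotPeek shows something like this decompiled pseudo-C#:

```csharp
// A closure object is allocated on every call, and the captured
// variables become its public fields.
public static string Substring(string x, int startIndex)
{
    var closure = new c__DisplayClass1();
    closure.x = x;
    closure.startIndex = startIndex;
    return GetOrCreate(closure.x + "." + closure.startIndex,
                       new Func<string>(closure.<Substring>b__0));
}
```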
There it is. A new instance of c__DisplayClass1 is created each time the Substring method is called. The parameter x we pass to the lambda is implemented as a public field of c__DisplayClass1.

How to Find
As with any other example in this series, first of all, make sure that a certain lambda causes you performance issues, i.e. generates huge traffic. This can be easily checked in dotMemory.
- Open a memory snapshot and select the Memory Traffic view.
- Find delegates that generate significant traffic. Objects of …+c__DisplayClassN are also a hint.
- Identify the methods responsible for this traffic.
For instance, if the Substring method from the example above is run 10,000 times, the Memory Traffic view will look as follows:
As you can see, the app has allocated and collected 10,000 delegates.
When working with lambdas, the Heap Allocation Viewer also helps a lot as it can proactively detect delegate allocation. In our case, the plugin's warning will look like this:
But once again, data gathered by dotMemory is more reliable, because it shows you whether this lambda is a real issue (i.e. whether or not it generates lots of traffic).

How to Fix
Considering how tricky lambda expressions may be, some companies even prohibit using lambdas in their development processes. We believe that lambdas are a very powerful instrument which definitely can and should be used as long as particular caution is exercised.
The main strategy when using lambdas is avoiding closures. In such a case, a created delegate will always be cached with no impact on traffic.
Thus, for our example, one solution is to not pass the parameter x to the lambda. The fix would look as follows:
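The fixed snippet isn't included in this copy; one way to avoid the closure is to pass the context through parameters of GetOrCreate instead (a hypothetical overload, with the same illustrative names as before):

```csharp
using System;
using System.Collections.Generic;

public static class StringHelper
{
    private static readonly Dictionary<string, string> _cache =
        new Dictionary<string, string>();

    // The factory receives its inputs as arguments, so callers never
    // need to capture anything in the lambda they pass in.
    private static string GetOrCreate(string key, string s, int index,
                                      Func<string, int, string> factory)
    {
        string value;
        if (!_cache.TryGetValue(key, out value))
        {
            value = factory(s, index);
            _cache[key] = value;
        }
        return value;
    }

    public static string Substring(string x, int startIndex)
    {
        // Captures nothing: the compiler creates this delegate once
        // and caches it in a static field.
        return GetOrCreate(x + "." + startIndex, x, startIndex,
                           (s, i) => s.Substring(i));
    }
}
```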
The updated lambda doesn’t capture any variables; therefore, its delegate should be cached. This can be confirmed by dotMemory:
As you can see, now only one instance of Func is created.
If you need to pass some additional context to GetOrCreate, a similar approach (avoiding variable closure) should be used. For example:
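A generic sketch of the same idea (the signature is hypothetical): whatever context the factory needs travels through a parameter, never through a capture:

```csharp
using System;
using System.Collections.Generic;

public class Cache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _items =
        new Dictionary<TKey, TValue>();

    // TContext carries the extra state, so factory lambdas can stay
    // closure-free (and therefore cacheable by the compiler).
    public TValue GetOrCreate<TContext>(TKey key, TContext context,
                                        Func<TContext, TValue> factory)
    {
        TValue value;
        if (!_items.TryGetValue(key, out value))
        {
            value = factory(context);
            _items[key] = value;
        }
        return value;
    }
}
```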
As we just saw in the previous section, lambda expressions always assume that a delegate is created. What about LINQ? The concepts of LINQ queries and lambda expressions are closely connected and have very similar implementation ‘under the hood.’ This means that all concerns we discussed for lambdas are also true for LINQs.
If your LINQ query contains a closure, the compiler won't cache the corresponding delegate. For example:
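The query from the post isn't reproduced here; a sketch of the pattern being described (names are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Queries
{
    public static List<string> LongerThan(List<string> strings, int threshold)
    {
        // 'threshold' is captured by the query, so a closure and a new
        // delegate are allocated on every call.
        return strings.Where(s => s.Length > threshold).ToList();
    }
}
```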
As the threshold parameter is captured by the query, its delegate will be created each time the method is called. As with lambdas, traffic from delegates can be checked in dotMemory:
Unfortunately, there's one more pitfall to avoid when using LINQ. Any LINQ query (as any other query) assumes iteration over some data collection, which, in turn, assumes creating an iterator. The subsequent chain of reasoning should already be familiar: if this LINQ query stays on a hot path, then constant allocation of iterators will generate significant traffic.
Consider this example:
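The snippet isn't included in this copy; based on the surrounding text (a Foo method calling GetLongNames, which uses Where), it presumably looked something like this sketch:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class NameFilter
{
    public static List<string> GetLongNames(List<string> names)
    {
        // Where() returns a lazily evaluated WhereListIterator<string>;
        // a fresh iterator instance is allocated on every call.
        return names.Where(n => n.Length > 10).ToList();
    }

    public static void Foo(List<string> names)
    {
        List<string> longNames = GetLongNames(names);
        // ... use longNames ...
    }
}
```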
Each time GetLongNames is called, the LINQ query will create an iterator.

How to Find
With dotMemory, finding excessive iterator allocations is an easy task:
- Open a memory snapshot and select the Memory Traffic view.
- Find objects from the namespace System.Linq that contain the word "iterator". In our example we use the Where LINQ method, so we look for System.Linq.Enumerable+WhereListIterator<string> objects.
- Determine the methods responsible for this traffic.
For instance, if we call the Foo method from our example 10,000 times, the Memory Traffic view will look as follows:
The Heap Allocation Viewer plugin also warns us about allocations in LINQs, but only if they explicitly call LINQ methods. For example:
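For instance, the plugin flags the allocation when the LINQ method is called explicitly (an illustrative sketch):

```csharp
// Method syntax: the plugin can flag the iterator allocated by Where().
var longNames = names.Where(n => n.Length > 10);

// Query syntax compiles to the same calls, but may not be flagged,
// so the dotMemory traffic data remains the ground truth.
var longNames2 = from n in names where n.Length > 10 select n;
```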
How to Fix
Unfortunately, the only answer here is to not use LINQ queries on hot paths. In most cases, a LINQ query can be replaced with foreach. In our example, a fix could look like this:
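The fixed snippet isn't reproduced in this copy; the foreach replacement would be along these lines:

```csharp
using System.Collections.Generic;

public static class NameFilter
{
    // Same result as the LINQ version, with no iterator or delegate
    // allocations on the hot path.
    public static List<string> GetLongNames(List<string> names)
    {
        var result = new List<string>(names.Count);
        foreach (string n in names)
        {
            if (n.Length > 10)
                result.Add(n);
        }
        return result;
    }
}
```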
As no LINQs are used, no iterators will be created.
We hope this series of posts has been helpful. Just in case, the previous two can be found here:
- Unusual Ways of Boosting Up App Performance. Boxing and Collections
- Unusual Ways of Boosting Up App Performance. Strings
Even more goodies are coming in RavenDB 3.0. Below you can see how to visualize the replication topology in a RavenDB Cluster. You can also see that the t5 database is down (marked as red).
This is important, since this gives us the ability to check the status of the topology from the point of view of the actual nodes. So a node might be up for one server, but not for the other, and this will show up here.
Besides, it is a cool graphic that you can use in your system documentation, and it makes things much easier to explain.
Many of us software developers work with computers to avoid contact with people. To be fair, we have all had our fair share of clients who would not understand why we couldn't draw red lines with green ink. I understand why we would rather stay away from people who don't understand what we do.
However… (there's always a however) as I recently started my own business, I've really started to understand the meaning of building your network and staying in contact with people. While being an MVP has always led me to meet great people all around Montreal, I saw the real value when a very good contact of mine introduced me to one of my first clients. He knew they needed someone with my skills and introduced me directly, skipping all the queues.
You can't really ask for more. My first client was a big company. You can't get in there without being a big company that won a bid, being someone renowned, or having the right contacts.
You can't be the big company, and you might never be someone renowned, but you can definitely work on contacts and expand the number of people you know.
So what can you do to expand your contacts and grow your network?

Go to user groups
This is killing two birds with one stone. First, you learn something new. It might be boring if you already know everything, but let me give you a nice trick.
Arrive early and chat with people. If you are new, ask them if they are new too, ask them about their favourite presentation (if any), where they work, whether they like it, etc. Boom. First contact is done. You can stop sweating.
If this person has been there more than once, s/he probably knows other people you can be introduced to.

Always have business cards
I'm a business owner now. I need to have cards. You might think of yourself as a low-importance developer, but if you meet people and impress them with your skills… they will want to know where you hang out.
If your business doesn't have $50 to put on you, make your own! VistaPrint makes those "networking cards" where you can just input your name, email, position, social networks, whatever, and you can get 500 for less than $50.
Everyone in the business should have business cards, especially those who make the company money.

Don't expect anything
I know… giving out your card sounds like you want to sell something to people or that you want them to call you back.
When I give my card, it's in the hope that when they come back later that night and see my card, they will think "Oh yeah, it's that guy I had a great conversation with!" I don't want them to think I'm there to sell them something.
My go-to phrase when I give it to them is "If you have any questions or need a second opinion, call me or email me! I'm always available for people like you!"
And I am.

Follow-up after giving out your card
When you give your card and receive another in exchange (you should!), send them a personal email. Tell them about something you liked from the conversation you had and ask them if you can add them on LinkedIn (always good). This seems simple to a salesman, but we developers often forget that an email the day after has a very good impact.
People will remember you for writing to them personally with specific details from the conversation.
Yes, that means no "copy/paste" email. You've got to make it personal.
If the other person doesn't have a business card, take the time to note their email and full name (bring a pad!).

Rinse and repeat
If you keep on doing this, you should start to build a very strong network of developers in your city. If you have a good profile, recruiters should also start to notice you. Especially if you added all those people on LinkedIn.
It's all about incremental growth. You won't be a superstar tomorrow (and neither am I), but by working at it, you might end up finding your next job through weird contacts that you only met once but who were impressed by who you are.

Conclusion
So here's the Too Long; Didn't Read version: Go out. Get business cards. Give them to everyone you meet. Your intention is to help them, not sell them anything. Repeat often.
But in the long run, it's all about getting out there. If you want a more detailed read on what real networking is about, you should definitely read Work the Pond by Darcy Rezac. It's a very good read.
So here I go again! We have Phil Haack explaining how he handles tasks in his life with GitHub, James Chambers' series on MVC and Bootstrap, Visual Studio 2014 Update 3, a new MVC+Web API release and more!
In particular, don't miss this awesome series by Tomas Jansson about CQRS. He did an awesome job and I think you guys need to read it!
So beyond this, I'm hoping you guys have a great day!