I got a lot of really great answers to my "Where do old developers go?" post, and I'm feeling much better about this now.
Now let's turn this question around: instead of asking what is going on in the industry, let's check what is going on with you. In particular, do you have a career plan at all?
An easy way to check is to ask: "What are you going to do 3 years, 7 years, and 20 years from now?"
Of course, the best-laid plans of mice and men often go awry, plans for the future are written in sand on a stormy beach, and so on. Any future planning has to carry the caveat that plans are just plans, with reality and life getting in the way.
For lawyers*, the career path might be: trainee, associate, senior associate, junior partner, partner, named partner. (* This is based solely on seeing some legal TV shows, not actual knowledge.) Most lawyers don't actually become named partners, obviously, but that is what you are planning for.
As discussed in the previous post, a lot of developers move to management positions at some point in their careers, mostly because salaries and benefits tend to flatten after about ten years or so for most people on the development track. Others decide that going independent and becoming consultants or contractors is a better way to increase their income. Another path is to rise in the technical track at a company that recognizes technical excellence; those are usually pure tech companies, and it is rare to find such positions in non-technical companies. Yet another track is the architect route, which is available in non-tech companies, especially big ones. You also have the startup route, the Get Rich Burning Your Twenties mode, but that is high risk / high reward, and people who think about career planning tend to avoid such things unless carefully considered.
It is advisable to actually consider those options, try to decide which options you'll want available to you in the next 5–15 years, and take steps accordingly. For example, if you want to go down the management track, you'll want to work on things like people skills, be able to converse fluently with the business in their own terms, and learn to play golf. You'll want to hold leadership positions from a relatively early stage, so team lead is a stepping stone you'll want to reach, for example. There is a lot of material on this path, so I'm not going to cover it in detail.
If you want to go with the Technical Expert mode, that means you probably need to grow a beard (there is nothing like stroking a beard in quiet contemplation to impress people). More seriously, you'll want to get a deep level of knowledge in several fields, preferably ones that you can tie together into a cohesive package. For example, a network expert would be able to understand how TCP/IP works and actually make use of that when optimizing an HTML5 app. Crucial at this point is also the ability to transfer that knowledge to other people. If you are working within a company, that increases the overall value you have, but a lot of the time, technical experts are consultants. Focusing on a relatively narrow field gives you a lot more value, but narrows your utility. Remember that updating your knowledge is very important. The good news is that if you have a good grasp of the basics, you can get to grips with new technology very easily.
The old timer mode fits people who work in big companies and who believe that they can carve out a niche in that company based on their knowledge of the company's business and how things actually work. This isn't necessarily one year of experience repeated 20 times, although in many cases that seems to be what happens. Instead, it is a steady job with reasonable hours, and you know the business and the environment you are working in well enough that you can just get things done, without a lot of fussing around. Change is hard, however, because those places tend to be very conservative. Then again, at a certain point you can build new systems in whatever technology you want (you tend to become the owner of certain systems once you've been around longer than the people who are actually using them). That does carry a risk, however. You can be fired for whatever reason (merger, downsizing, etc.) and you'll have a hard time finding an equivalent position.
The entrepreneur mode is for people who want to build something. That can be a tool or a platform, and they create a business selling it. A lot of the time it involves a lot of technical work, but there is a huge amount of stuff that needs to be done that is non-technical: marketing and sales, insurance and taxes, hiring people, etc. The good thing about this is that you usually don't need a very big investment in your product before you can start selling it. We are talking about roughly 3–6 months for most things, for 1–3 people. That isn't a big leap, and in many cases you can handle it by eating into some savings, or by moonlighting. Note that this can completely swallow your life, but you are your own boss, and there is a great deal of satisfaction in building a product around your vision. Be aware that you need contingency plans for both failure and success. If your product becomes successful, you need to make sure that you can handle the load (hire more people, train them, etc.).
The startup mode is very different from the entrepreneur mode. In a startup, you are focused on getting investments, and the scope is usually much bigger. There is less financial risk (you usually have investors for that), but there is a much higher risk of failure, and there is usually a culture that considers throwing yourself on a hand grenade advisable. The idea is that you are going to burn yourself at both ends for two to four years, and in return you'll have enough money to maybe stop working altogether. I consider this foolish, given the success rates, but there are a lot of people who consider it the only way worth doing. The benefits usually include a nice environment, both physically and professionally, but it comes with the expectation that you'll stay there for so many hours that it becomes your second home.
There are other modes and career paths, but now I have to return to my job.
Today we’re excited to announce the general availability of PostSharp 4, the newest version of our 100%-compatible productivity extension to C# and VB. PostSharp allows developers and architects to automate the implementation of design patterns by encapsulating them into reusable components named aspects. Unlike other development productivity tools, PostSharp does not just make it easier to type code, but results in a smaller codebase that is easier to understand and has fewer defects.
You can download PostSharp 4 today from Visual Studio Gallery and update your projects using NuGet.
PostSharp 4 applies the success of pattern-driven development to the realm of multithreaded applications and provides a thread-safe extension to C# and VB. Application developers can annotate their classes with custom attributes such as [Immutable], [Actor] or [Synchronized] and have their code automatically validated against the model. Any violations result in build-time errors or run-time exceptions instead of data corruptions.
PostSharp 4 makes it incredibly easy to implement undo/redo in desktop and mobile applications by automatically recording changes in model objects. Additionally, it includes more than 15 other improvements and enhancements.
Information in this release announcement includes:
- Threading Models;
- Aggregation and Composition;
- Other Improvements;
- More Free Aspects in PostSharp Express;
- Platforms Newly Supported;
- Platforms No Longer Supported.
When comparing modern object-oriented programming to assembly language, one can see that notions like classes, fields, variables or methods really simplify developers’ lives and increase their productivity. But when it comes to multithreading, one cannot state that object oriented programming (OOP) delivers so much value. As a result, multithreading is still being addressed at a low level of abstraction with locks, events or interlocked accesses. The resulting cost for the industry, in terms of development effort and number of defects, is enormous.
Alternatives to OOP like functional programming and actor programming have been proposed to address the challenges of multithreading. However, these paradigms are not as appropriate as OOP for business and mobile applications. They have failed to gain wide adoption, existing only in narrow niches where high performance wins over high development cost.
Instead of trying to replace OOP, PostSharp 4 extends the C# and VB languages with custom attributes like [Immutable], [ThreadAffine], [Synchronized] or [Actor], which allow developers to assign a threading model to a class. Based on these custom attributes, the compiler extension validates the code against the model and generates synchronization or dispatching instructions.
PostSharp 4 implements the following threading models:
- Immutable and Freezable;
- Synchronized and Reader-Writer Synchronized;
- Thread Affine and Thread Unsafe.
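As a sketch of how these models are applied in practice (this assumes the PostSharp 4 pattern libraries are referenced; the attribute names come from the announcement above, but treat the namespace and class details as illustrative rather than definitive):

```csharp
using PostSharp.Patterns.Threading;

// All public methods run under a lock managed by the aspect, so
// concurrent callers are serialized without explicit locking code.
[Synchronized]
public class Counter
{
    private int _value;

    public void Increment()
    {
        _value++; // safe: the aspect acquires the lock around this call
    }

    public int Value
    {
        get { return _value; }
    }
}

// Instances may not change after construction; violations surface as
// build-time errors or runtime exceptions instead of data corruption.
[Immutable]
public class Point
{
    public double X { get; private set; }
    public double Y { get; private set; }

    public Point(double x, double y)
    {
        X = x;
        Y = y;
    }
}
```

The point of the approach is visible even in this small sketch: the threading policy is declared once per class, and the compiler extension, not the developer, is responsible for enforcing it at every member access.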
Thanks to threading models, source code is safer and simpler. Threading models bring the same kind of productivity gains as the stricter C#/VB memory model brought compared to C++. Source code is more concise and easier to reason about, and there are far fewer chances for non-deterministic data races. Threading models make it easier for everyone on the team, not just multithreading experts, to build robust business applications.
Why is this a significant innovation?
We believe that our approach to multithreading is innovative in many ways, and that its significance exceeds the scope of the PostSharp community. Points of interest include the focus on business applications and average developers (instead of critical software and expert developers); the idea that several threading models can co-exist in the same application as an extension of a mainstream language (instead of requiring a rewrite in a specialized language); and the use of UML aggregation as a foundation to select the synchronization granularity.
See our technical white paper for more details about our approach. We think this new approach could be applied to other object-oriented languages and environments.
Additional Resources
For more information, see:
- Blog post: Immutable and Freezable Done Well
- Technical White Paper: A Thread-Safe Extension to Object-Oriented Programming
- Reference Documentation: PostSharp Threading Pattern Library
A feature that always scores at the top of the wish list of every application is undo/redo. Oddly enough, the feature usually remains unaddressed for years because it is awfully painful to implement using conventional technologies. It simply generates too much boilerplate code.
PostSharp 4 takes the boilerplate code out of the way. Just add the [Recordable] attribute to your model classes, and they will start appending any change to a Recorder. You can then use the Undo and Redo methods of the Recorder, or add the off-the-shelf WPF buttons to your toolbar.
What are the benefits?
PostSharp 4 makes undo/redo affordable for virtually any application, even line-of-business software that typically has a large object model and a small user base. Previously, the feature had a prohibitive cost and would pay off only in simple applications or software with a large customer base.
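A minimal sketch of the idea (the Invoice class is hypothetical, and while Recorder access via RecordingServices follows the PostSharp pattern library, check the reference documentation linked below for exact names):

```csharp
using PostSharp.Patterns.Recording;

// Changes to a [Recordable] object's state are appended to a Recorder.
[Recordable]
public class Invoice
{
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

public static class UndoRedoDemo
{
    public static void Run()
    {
        var invoice = new Invoice { Customer = "Northwind", Total = 100m };
        var recorder = RecordingServices.DefaultRecorder;

        invoice.Total = 150m;
        recorder.Undo();  // Total reverts to 100m
        recorder.Redo();  // Total is back to 150m
    }
}
```

Note that nothing in the Invoice class mentions undo/redo: the recording, the change log, and the restore logic are all generated by the aspect.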
The Recordable pattern offers more flexibility than the traditional Memento pattern, and its implementation can be completely automated by the compiler. It demonstrates how better compiler technologies can lead to simpler source code and reduced development effort.
Additional Resources
For more information, see:
- Blog post: Undo/Redo – part 1, part 2, part 3, part 4
- Reference Documentation: Implementing Undo/Redo
When we built our threading models and the undo/redo feature, we realized that we needed a notion of parent-child relationships. As we thought more deeply about it, it became clear that most object designs actually rely on this notion. That is not surprising if one considers that aggregation and composition are core concepts of the UML specification. Yet it's a pity they have never been implemented in programming languages. We needed to fix that.
Classes that can be involved in parent-child relationships must be annotated with the [Aggregatable] custom attribute, or with any other attribute that implicitly provides the Aggregatable aspect. Then fields that hold a reference to another object must be annotated with [Child], [Reference] or [Parent].
The principal role of the Aggregatable aspect is simply to automatically set the parent property when a child is assigned to a parent and to implement a child visitor.
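For example (a sketch; AdvisableCollection is PostSharp's collection type for advisable objects, and the Order/OrderLine classes are hypothetical):

```csharp
using PostSharp.Patterns.Model;
using PostSharp.Patterns.Collections;

[Aggregatable]
public class Order
{
    public Order()
    {
        Lines = new AdvisableCollection<OrderLine>();
    }

    // Items added to this collection automatically get their
    // Parent reference pointed back at this Order.
    [Child]
    public AdvisableCollection<OrderLine> Lines { get; private set; }
}

[Aggregatable]
public class OrderLine
{
    // Maintained by the aspect; never assigned by hand.
    [Parent]
    public Order Order { get; private set; }

    public string Product { get; set; }
}
```

Other aspects, such as the threading models and [Recordable], build on these relationships to decide, for instance, how far a lock or an undo operation should propagate through an object graph.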
For more information, see:
- Blog post: Aggregation and Composition Patterns
- Reference Documentation: Implement Parent/Child Relationships
In addition to the previously described new features, we significantly enhanced the PostSharp Aspect Framework and worked on other components as well:
- 4x runtime performance enhancement on our NotifyPropertyChanged aspect.
- Improved reliability and scope of our deadlock detection policy.
- Dynamic advices: see IAdviceProvider, IntroduceInterface, ImportLocation, ImportMethod, IntroduceMethod.
- Aspect Repository and Late Validation.
- OnInstanceConstructed advice.
- Faster advice state lookup: see DeclarationIdentifier.
PostSharp 4 generalizes the idea of giving our ready-made pattern libraries for free for small projects. PostSharp Express 4 now includes the following features for free – for up to 10 classes per Visual Studio project:
- Recordable, EditableObject (undo/redo)
- Aggregatable, Disposable
- Threading (threading models, dispatching)
- Code Contracts
- Logging (up to 50 methods)
If you like the new features and want to use them more, you can buy them as part of PostSharp Ultimate or as an individual product. More on our pricing page.
Platforms Newly Supported
PostSharp 4 adds full support for:
- WinRT, both on Windows 8 (the previous support was error-prone and needed a complete redesign) and on Windows Phone 8.1.
- Windows Phone 8.1 “Silverlight”.
- C++ “mixed mode” assembly references.
We decided that new releases of PostSharp would not support platforms for which Microsoft no longer provides mainstream support at the time of a PostSharp RTM release.
On development workstations and build machines, PostSharp 4 no longer supports:
- Windows XP (Vista, 7 or 8 is required).
- Visual Studio 2010 (2012 or 2013 is required).
- .NET Framework 4.0 (4.5 is required).
On end-user devices, PostSharp 4 no longer supports:
- .NET Framework 2.0 (3.5, 4.0 or 4.5 is required).
- Windows Phone 7 (8 or 8.1 is required).
- Silverlight 4 (5 is required).
Note that PostSharp 3.1 still supports these platforms.
Summary
While Microsoft has lately been catching up with its competitors in the consumer market, enterprise developers may feel neglected. At PostSharp, our focus is to increase productivity of large .NET development teams. It's only when you start counting lines of code with 6 or 7 digits (and you realize that every line may cost your employer between $10 and $20) that you appreciate and receive the full benefits of automating design patterns.
PostSharp 4 marks a significant innovation in the realm of multithreading. It raises the level of abstraction by defining models so the machine can do more and humans must think less. But instead of requiring developers to switch to a different language, it respects their investments in C# and VB and extends the environment in a backward compatible fashion.
PostSharp 4 includes plenty of important new features including undo/redo, parent-child relationships, performance improvements, new advices and more.
It’s an exciting time to be a developer.
A while back we blogged about our plans to make EF7 a lightweight and extensible version of EF that enables new platforms and new data stores. We also talked about our EF7 plans in the Entity Framework session at TechEd North America.
Prior to EF7 there were two ways to store models: in the xml-based EDMX file format, or in code. Starting with EF7 we will be retiring the EDMX format and having a single code-based format for models. A number of folks have raised concerns about this move, and most of the concern stems from misunderstanding what a statement like "EF7 will only support Code First" really means.
Code First is a bad name
Prior to EF4.1 we supported the Database First and Model First workflows. Both of these use the EF Designer to provide a boxes-and-lines representation of a model that is stored in an xml-based .edmx file. Database First reverse engineers a model from an existing database and Model First generates a database from a model created in the EF Designer.
In EF4.1 we introduced Code First. Understandably, based on the name, most folks think of Code First as defining a model in code and having a database generated from that model. In actual fact, Code First can be used to target an existing database or generate a new one. There is tooling to reverse engineer a Code First model based on an existing database. This tooling originally shipped in the EF Power Tools and then, in EF6.1, was integrated into the same wizard used to create EDMX models.
Another way to sum this up is that rather than a third alternative to Database & Model First, Code First is really an alternative to the EDMX file format. Conceptually, Code First supports both the Database First and Model First workflows.
Confusing… we know. We got the name wrong. Calling it something like "code-based modeling" would have been much clearer.
Is code-based modeling better?
Obviously there is overhead in maintaining two different model formats. But aside from removing this overhead, there are a number of other reasons that we chose to just go forward with code-based modeling in EF7.
- Source control merging, conflicts, and code reviews are hard when your whole model is stored in an xml file. We've had lots of feedback from developers that simple changes to the model can result in complicated diffs in the xml file. On the other hand, developers are used to reviewing and merging source code.
- Developers know how to write and debug code. While a designer is arguably easier for simple tasks, many projects end up with requirements beyond what you can do in the designer. When it comes time to drop down and edit things, xml is hard and code is more natural for most developers.
- The ability to customize the model based on the environment is a common requirement we hear from customers. This includes scenarios such as a multi-tenant database where you need to specify a schema or table prefix that is known when the app starts. You may also need slight tweaks to your model when running against a different database provider. Manipulating an xml-based model is hard. On the other hand, using conditional logic in the code that defines your model is easy.
- Code based modeling is less repetitive because your CLR classes also make up your model and there are conventions that take care of common configuration. For example, consider a Blog entity with a BlogId primary key. In EDMX-based modeling you would have a BlogId property in your CLR class, a BlogId property (plus column and mapping) specified in xml and some additional xml content to identify BlogId as the key. In code-based modeling, having a BlogId property on your CLR class is all that is needed.
- Providing useful errors is also much easier in code. We've all seen the "Error 3002: Problem in mapping fragments starting at line 46:…" errors. The error reporting on EDMX could definitely be improved, but throwing an exception from the line of code-based configuration that caused an issue is always going to be easier.
We should note that in EF6.x you would sometimes get these unhelpful errors from the Code First pipeline; this is because it was built on top of the infrastructure designed for EDMX. In EF7 this is not the case.
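To make the convention and conditional-configuration points concrete, here is a sketch in the EF6-style code-based API (the multi-tenant schema scenario and all class names here are hypothetical):

```csharp
using System.Data.Entity;

// BlogId is recognized as the primary key purely by convention;
// no xml mapping or explicit key configuration is needed.
public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    private readonly string _tenantSchema;

    public BloggingContext(string tenantSchema)
    {
        _tenantSchema = tenantSchema;
    }

    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Conditional, environment-driven configuration like this is
        // exactly what a static EDMX file cannot express.
        modelBuilder.HasDefaultSchema(_tenantSchema);
    }
}
```

Because the schema is passed in at runtime, the same model can serve every tenant, something that would require generating or editing xml in the EDMX world.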
There is also an important feature that could have been implemented for EDMX, but was only ever available for code-based models.
- Migrations allows you to create a database from your code-based model and evolve it as your model changes over time. For EDMX models you could generate a SQL script to create a database to match your current model, but there was no way to generate a change script to apply changes to an existing database.
So, what will be in EF7?
In EF7 all models will be represented in code. There will be tooling to reverse engineer a model from an existing database (similar to what's available in EF6.x). You can also start by defining the model in code and use migrations to create a database for you (and evolve it as your model changes over time).
We should also note that we've made some improvements to migrations in EF7 to resolve the issues folks encountered trying to use migrations in a team environment.
We've covered all the reasons we think code-based modeling is the right choice going forwards, but there are some legitimate questions this raises.
What about visualizing the model?
The EF Designer was all about visualizing a model, and in EF6.x we also had the ability to generate a read-only visualization of a code-based model (using the EF Power Tools). We're still considering the best approach to take in EF7. There is definitely value in being able to visualize a model, especially when you have a lot of classes involved.
With the advent of Roslyn, we could also look at having a read/write designer over the top of a code-based model. Obviously this would be significantly more work, and it's not something we'll be doing right away (or possibly ever), but it is an idea we've been kicking around.
What about the "Update model from database" scenario?
"Update model from database" is a process that allows you to incrementally pull additional database objects (or changes to existing database objects) into your EDMX model. Unfortunately the implementation of this feature wasn't great, and you would often end up losing customizations you had made to the model, or having to manually fix up some of the changes the wizard tried to apply (often dropping to hand editing the xml).
For Code First you can re-run the reverse engineering process and have it regenerate your model. This works fine in basic scenarios, but you have to be careful how you customize the model; otherwise your changes will get reverted when the code is re-generated. There are some customizations that are difficult to apply without editing the scaffolded code.
Our first step in EF7 is to provide a reverse engineering process similar to what's available in EF6.x, and that is most likely what will be available for the initial release. We also have some ideas around pulling incremental updates into the model without overwriting any customization to previously generated code. These range from only supporting simple additive scenarios, to using Roslyn to modify existing code in place. We're still thinking through these ideas and don't have definite plans as yet.
What about my existing models?
We're not trying to hide the fact that EF7 is a big change from EF6.x. We're keeping the concepts and many of the top-level APIs from past versions, but under the covers there are some big changes. For this reason, we don't expect folks to move existing applications to EF7 in a hurry. We are going to be continuing development on EF6.x for some time.
We have another blog post coming shortly that explores how EF7 is part v7 and part v1 and the implications this has for existing applications.
Is everyone going to like this change?
We're not kidding ourselves; it's not possible to please everyone, and we know that some folks are going to prefer the EF Designer and EDMX approach over code-based modeling.
At the same time, we have to balance the time and resources we have and deliver what we think is the best set of features and capabilities to help developers write successful applications. This wasn't a decision we took lightly, but we think it's the best thing to do for the long-term success of Entity Framework and its customers, the ultimate goals being to provide a faster, easier-to-use stack and to reduce the cost of adding support for highly requested features as we move forward.
We are working to make Azure the best cloud platform for big data, including Apache Hadoop. To accomplish this, we deliver a comprehensive set of solutions such as our Hadoop-based solution Azure HDInsight and managed data services from partners, including Hortonworks. Last week Hortonworks announced the most recent milestone in our partnership and yesterday we announced even more data options for our Azure customers through a partnership with Cloudera.
Cloudera is recognized as a leader in the Hadoop community, and that’s why we’re excited Cloudera Enterprise has achieved Azure Certification. As a result of this certification, organizations will be able to launch a Cloudera Enterprise cluster from the Azure Marketplace starting October 28. Initially, this will be an evaluation cluster with access to MapReduce, HDFS and Hive. At the end of this year when Cloudera 5.3 releases, customers will be able to leverage the power of the full Cloudera Enterprise distribution including HBase, Impala, Search, and Spark.
We’re also working with Cloudera to ensure greater integration with Analytics Platform System, SQL Server, Power BI and Azure Machine Learning. This will allow organizations to build big data solutions quickly and easily by using the best of Microsoft and Cloudera, together. For example Arvato Bertelsmann was able to help clients cut fraud losses in half and speed credit calculations by 1,000x.
Our partnership with Cloudera allows customers to use the Hadoop distribution of their choice while getting the cloud benefits of Azure. It is also a sign of our continued commitment to make Hadoop more accessible to customers by supporting the ability to run big data workloads anywhere: on hosted VMs and managed services in the public cloud, on-premises, or in hybrid scenarios.
From Strata in New York to our recent news from San Francisco, it's an exciting time for those in the data space. We hope you join us for the ride!
General Manager, Data Platform
I went to the supermarket yesterday and forgot to get out of work mode, so here is this post. The grocery store checkout model exercise deals with the following scenario: you have a customer scanning products in a self-checkout lane, and you need to process the order.
In terms of external environment, you have:
- ProductScanned (ProductId: string) event
- Complete Order command
- Products (ProductId → Name, Price) dataset
So far, this is easy, however, you also need to take into account:
- Sales (1+1, 2+1, 5% off for store brands, 10% off for store brands for loyalty card holders).
- Purchase of items by weight (apples, bananas, etc).
- Per customer discount for 5 items.
- Rules such as alcohol can only be purchased after store clerk authorization.
- Purchase limits (can only purchase up to 6 items of the same type, except for specific common products)
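One way to keep such frequently changing rules manageable (a sketch of one possible design, not *the* answer to the exercise) is to model each rule as a separate pricing policy that is applied over the scanned lines:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public string ProductId;
    public decimal UnitPrice;
    public decimal Quantity;   // supports items sold by weight
    public decimal Discount;   // accumulated by the rules below
}

// Each rule inspects the whole order and may adjust discounts.
public interface IPricingRule
{
    void Apply(IList<OrderLine> lines);
}

// Example: "buy two, get one free" for a specific product.
public class TwoPlusOneRule : IPricingRule
{
    private readonly string _productId;

    public TwoPlusOneRule(string productId)
    {
        _productId = productId;
    }

    public void Apply(IList<OrderLine> lines)
    {
        var matches = lines.Where(l => l.ProductId == _productId).ToList();
        int freeUnits = (int)(matches.Sum(l => l.Quantity) / 3);
        foreach (var line in matches.Take(freeUnits))
            line.Discount = line.UnitPrice; // this unit is free
    }
}

public static class Checkout
{
    // New requirements become new IPricingRule implementations
    // instead of edits to one central pricing method.
    public static decimal Total(IList<OrderLine> lines,
                                IEnumerable<IPricingRule> rules)
    {
        foreach (var rule in rules)
            rule.Apply(lines);
        return lines.Sum(l => l.UnitPrice * l.Quantity - l.Discount);
    }
}
```

Rules that need authorization (alcohol) or enforce purchase limits would reject the order rather than adjust prices, so in a fuller design Apply would return a result instead of void; that is exactly the kind of change the exercise is designed to force on you.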
The nice thing about such an exercise is that it forces you to see how many things you have to juggle for such a seemingly simple scenario.
A result of this exercise would be to see how you handle relatively complex rules. Given the number of rules we already have, it should be obvious that there are going to be more, and that they are going to change on a fairly frequent basis. A better model would be to actually do this over time: you start by getting just the first part working, then you stream in the other requirements, and what you actually watch is how the code changes over time. Each new requirement causes you to make modifications and accommodate the new behavior.
The end result might be a Git repository that allows you to see the full approach that was used and how it changed over time. Ideally, you should see a lot of churn in the beginning, but then you'll have a lot less work to do as your architecture settles down.
It's a simple thing, and it will make it immediately obvious when one of your files contains accidental indentation tabs instead of the spaces that should replace them, or trailing spaces. All IDEs and code editors have an option to show whitespace. I always have it enabled. The subtle glyphs that materialize the spaces and tabs are hardly noticeable while you're working, except when something unusual is where it shouldn't be:
Here's where to find the option in Visual Studio (you can also toggle it using CTRL+R, CTRL+W):
And here it is in WebStorm:
We are pretty much always looking for new people; what is holding us back from expanding even more rapidly is the time that it takes to get to grips with our codebases and what we do here. That also means that we usually have at least one outstanding job offer available, because it takes a long time to fill it. But that isn't the topic for this post.
I started programming in school, I was around 14 / 15 at the time, and I picked up a copy of VB 3.0 and did some fairly unimpressive stuff with it. I count my time as a professional since around 1999 or so. That is the time when I started actually dedicating myself to learning programming as something beyond a hobby. That was 15 years ago.
When we started doing a lot of interviews, I noticed the following pattern regarding developers' availability:
That sort of made sense; some people got into software development for the money and left because it didn't interest them. From the history of Hibernating Rhinos, one of our developers left and is now a co-owner of a restaurant, and another is a salesman for lasers and other lab equipment.
However, what doesn't make sense is the ratio that I'm seeing. Where are the people who have been doing software development for decades?
Out of the hundreds of CVs that I have seen, fewer than 10 were from people over the age of 40. I don't recall anyone over the age of 50. Note that I'm somewhat biased toward hiring people with longer experience, because that often means that they don't need to learn what goes on under the hood; they already know.
In fact, looking at the salary tables, there actually isn't a level higher than 5 years of experience. After that, you have a team leader position, then you move into middle management, and then you are basically gone as a developer, I'm guessing.
What is the career path you have as a developer? And note that I'm explicitly throwing out management positions. It seems that those are very rare in our industry.
Microsoft has the notion of Distinguished Engineer and Technical Fellow for people who actually have decades of experience. In my head, I have this image of a SWAT team that you throw at the hard problems.
Outside of very big companies, those seem to be very rare. And that is kind of sad.
At Hibernating Rhinos, we plan to actually have those kinds of long career paths, but you'll need to ask me in 10–20 years how that turned out.