The convenience of .NET

Richard Lander

Convenient options are available for almost every task in life, from getting a ride to the airport to writing code. Convenience is the idea that a great solution is available when you want it and that it works for you. As designers of the .NET platform, we aim to provide convenient solutions for many tasks and to improve the convenience of writing apps with each new release.

This post kicks off a new series, exploring convenient solutions to common tasks. Productivity, performance, security, and reliability are hallmark design points of the .NET platform. We described them in detail in our recent Why .NET? post. Stephen Toub also published his annual performance post, Performance Improvements in .NET 8. This post (and the ones that will follow) explores the ideas and features discussed in those other posts in terms of convenient solutions. You’ll see a combination of high-level utility APIs that offer a nice balance of those design points and lower-level APIs that enable you to achieve a different balance per your needs.

The next posts go into much more detail on specific API families, with a lot of code and performance numbers, to fully explore these convenient solutions.

Let’s start the series with a more general exploration of how the .NET platform delivers on convenience.

Convenience is a spectrum

I like using the terms “convenience” and “control” to describe the two ends of the “convenience spectrum”. Convenience describes the experience of writing code; control describes your ability to define its behavior.

The most convenient code is compact and straightforward, often with at most a few options to vary behavior (as “choice” is itself a complexity). File.ReadAllText() is a good example. It returns the contents of a (text) file as a string that you can read and process. The lowest-level code enables a lot of flexibility, control, and performance optimization, but requires more careful use. Looking at you, File.OpenHandle() and RandomAccess.Read(), which together expose an operating system handle with very little getting in your way to read through a file (as bytes) with maximum performance.
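
As a rough sketch of those two endpoints, here's what reading the same small file looks like both ways (the file name and contents are invented for the example, and a single read suffices only because the file is tiny):

```csharp
using System;
using System.IO;
using System.Text;
using Microsoft.Win32.SafeHandles;

// Write a small sample file so the sketch is self-contained.
string path = Path.Combine(Path.GetTempPath(), "spectrum-demo.txt");
File.WriteAllText(path, "hello, spectrum");

// Convenience end: one call, one string.
string text = File.ReadAllText(path);

// Control end: an OS handle, explicit buffer management, explicit decoding.
using SafeFileHandle handle = File.OpenHandle(path);
byte[] buffer = new byte[(int)RandomAccess.GetLength(handle)];
int bytesRead = RandomAccess.Read(handle, buffer, fileOffset: 0);
string sameText = Encoding.UTF8.GetString(buffer, 0, bytesRead);

Console.WriteLine(text == sameText); // True
```

The control end pays off when you manage your own buffers (renting from ArrayPool<byte>, say) or read slices of large files at chosen offsets; for a small one-shot read, File.ReadAllText is the better tool.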

I’m going to show you a couple of lists of APIs. It’s OK if they don’t look familiar. These lists start with the most raw concepts and formats (the highest degree of control) and end with the most packaged and refined concepts (the most convenient).

Convenience spectrum for reading a text file:

  • Most control: File.OpenHandle + RandomAccess.Read
  • More convenient: File.Open + FileStream.Read
  • Even more convenient: File.OpenText + StreamReader.ReadLine
  • Even more convenient: File.ReadLines + IEnumerable<string>
  • Even more convenient: File.ReadAllLines + string[]
  • Most convenient: File.ReadAllText + string
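
Two adjacent points from the middle of this list can be sketched as follows (the file name is invented for the example). File.ReadAllLines materializes everything up front, while File.ReadLines hands back a lazy IEnumerable<string>, so enumeration can stop early:

```csharp
using System;
using System.IO;
using System.Linq;

string path = Path.Combine(Path.GetTempPath(), "lines-demo.txt");
File.WriteAllLines(path, new[] { "alpha", "beta", "gamma" });

// Eager: the whole file lands in memory as a string[].
string[] all = File.ReadAllLines(path);

// Lazy: lines are read on demand; this stops at the first match
// without reading the rest of the file.
string? firstB = File.ReadLines(path).FirstOrDefault(line => line.StartsWith("b"));

Console.WriteLine($"{all.Length} lines; first b-line: {firstB}");
```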

Convenience spectrum for reading JSON text:

  • Most control: Utf8JsonReader + Pipelines or Stream
  • More convenient: JsonDocument + Stream
  • Even more convenient: JsonSerializer + Stream
  • Most convenient: JsonSerializer + string

Note: The APIs are listed as the primary API + their most likely companion API or type.
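
To make the JSON list concrete, here's a minimal sketch of its two ends, with an invented payload: JsonSerializer binding a string in one call versus Utf8JsonReader walking raw UTF-8 tokens with no intermediate document:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Text.Json;

string json = "{\"name\":\"widget\",\"count\":3}";

// Most convenient: one call binds the whole payload.
var map = JsonSerializer.Deserialize<Dictionary<string, JsonElement>>(json)!;
Console.WriteLine(map["name"].GetString()); // widget

// Most control: a forward-only token reader over the UTF-8 bytes.
byte[] utf8 = Encoding.UTF8.GetBytes(json);
var reader = new Utf8JsonReader(utf8);
int count = -1;
while (reader.Read())
{
    if (reader.TokenType == JsonTokenType.PropertyName && reader.ValueTextEquals("count"))
    {
        reader.Read(); // advance to the property's value
        count = reader.GetInt32();
    }
}
Console.WriteLine(count); // 3
```

In practice you'd deserialize into your own POCO type rather than a dictionary; the dictionary just keeps the sketch self-contained.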

A key takeaway is that there is no clear break between convenient and control patterns in these lists. The end of one convenience pattern overlaps with the start of the next control pattern. One person’s convenience is another’s control. That’s the definition of a spectrum.

Convenience starts with choice

You might wonder why we need all these APIs. They all do the same thing, right? There are two answers. The first is that each of these options is the right tool for the job in different circumstances and is convenient for that circumstance. In fact, the .NET developer community consistently requests a broad spectrum of APIs from us, and we’re happy to deliver them. The second is that we had to build the low-level APIs in order to make the high-level ones. It’s a lot like towers of Lego blocks. In theory, we could have exposed only the high-level APIs by making all the low-level ones private, but that’s neither desirable nor practical in the general case.

Some developer stacks primarily expose high-level APIs that are built on native code libraries, but are missing useful lower-level APIs. The native code libraries are often written in a way that makes it impractical to expose the lower-level APIs to a managed language, so they are not. That’s quite limiting. With .NET, we have a strong philosophy that the library functionality we build should be written in C#, which means that both high- and low-level APIs are available for you to use. It also means you can read the code of all the APIs you use in C# (on GitHub), like the File class.

Of course, there are places where we haven’t exposed all of the layers; every new API we expose is something we’ll have effectively forever, and requires design and direct testing and maintenance and documentation and compatibility constraints and so on. We’re thus selective in which layers we expose when, and are constantly re-evaluating whether additional support should be exposed. The previously mentioned File.ReadAllText, for example, has been around for many, many years, whereas the cited RandomAccess.Read was only recently introduced. Our long-term trend has been to make lower-level APIs available where there is a compelling case.

Convenience enables collaboration

The .NET libraries expose a broad set of functionality for you to use. In many cases (like with the File type), much of the related functionality is exposed in one place and designed to work as a larger coherent system. That means you can use more convenient APIs in one part of your code and higher-control APIs elsewhere, and it can all be made to work together nicely.

“Hey … I’m going to be writing this data to a Stream with APIs that give me the control we need for our service. You can use StreamReader.ReadLineAsync to read it. If that doesn’t work, I’ll expose an IAsyncEnumerable<string> for each line and you can use await foreach as a streaming solution. Either option works for me. I love how straightforward all of these options are. It is super easy to connect our code together and it’s all super fast and convenient.” — .NET dev at ACME Solutions.

Developers working in teams can make different (and equally good) convenience choices at different layers within a larger codebase, with straightforward patterns to connect those layers.
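
A minimal sketch of that hand-off, using a MemoryStream as a stand-in for whatever stream the control-oriented side actually produces:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

// Producer: writes raw bytes with lower-level Stream APIs.
var stream = new MemoryStream();
byte[] payload = Encoding.UTF8.GetBytes("first\nsecond\n");
await stream.WriteAsync(payload);
stream.Position = 0;

// Consumer: layers a convenient line reader over the very same stream.
using var reader = new StreamReader(stream);
var lines = new List<string>();
string? line;
while ((line = await reader.ReadLineAsync()) != null)
{
    lines.Add(line);
}

Console.WriteLine(string.Join(", ", lines)); // first, second
```

Neither side needs to know which point on the spectrum the other picked; the Stream abstraction is the seam between them.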

I’m in control

Yes, yes. The point here isn’t to pick a point on this spectrum and stick to it for all the code you write. Instead, the intent is to select APIs that satisfy the requirements of the algorithm at hand, even if you have the skills to write more challenging code that may be better on some metric (which may or may not matter). The person who maintains your code next might not have the same skills and may (incorrectly) conclude that the pattern you chose is a requirement when it’s not.

We use convenient APIs in some places in the .NET libraries, even though they are not the maximum speed. They make the code small, simple, and easy to understand, and that can be more valuable than maximum speed.

That’s what one of our architects had to say about our approach to our codebase, even in a team dedicated to high performance. We like to write convenient code whenever we can. We’d rather focus our efforts on building more features and optimizing APIs that are likely to get called in a hot loop.

The other side of the coin is that the more efficient the convenience APIs are, the more we’ll be able to use them without concern in our codebase. It makes the team as a whole more efficient. We try to make convenience APIs as efficient as possible within the confines of what the shape of the API allows.

Breaking the spectrum

There are a few cases where a single API covers the majority of use cases. This only happens when an API with a simple contract is an absolute workhorse and is required by a lot of scenarios.

The string class APIs are a key example. IndexOf and IndexOfAny are two favorites. We use these APIs pervasively in the .NET platform and they are used just as much by .NET developers. You can see how many PRs have targeted those APIs.

Many of the IndexOf{Any} calls are actually made on spans now, rather than directly on string.IndexOf{Any}. While the spans frequently point into strings, these APIs often operate on slices (after an internal call to string.AsSpan).

This family of APIs has been improved a lot, using multiple techniques. For example, these APIs use vector (SIMD) CPU instructions to search for terms in a string. In .NET 8, support for AVX-512 was added. That’s not yet relevant for most hardware; however, it means that IndexOf will be ready for newer hardware when you’ve got it.
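
A small sketch of both flavors (the input string is made up): string.IndexOfAny on the full string, then the span overload searching a slice without allocating a substring:

```csharp
using System;

string entry = "key=value;flag=true";

// string.IndexOfAny: first occurrence of any of the given characters.
int firstSep = entry.IndexOfAny(new[] { '=', ';' }); // '=' at index 3

// Span flavor: slice past the ';' and search again; no substring is allocated.
ReadOnlySpan<char> tail = entry.AsSpan(entry.IndexOf(';') + 1);
int nextSep = tail.IndexOfAny('=', ';'); // '=' at index 4 within "flag=true"

Console.WriteLine($"{firstSep}, {nextSep}");
```

Both calls go through the same vectorized search under the hood; the span version is what lets parsers scan large inputs piecewise without garbage.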

We’ll dive into IndexOfAny in much more depth in the System.IO post. It’s a great API.

Closing

The .NET team has a “big tent” philosophy. We want every developer to find APIs that are approachable and suitable. If you are new to programming, we’ve got APIs for you. If you are at home with low-level APIs, we’ve got those too, and they’ll likely feel familiar.

I’m looking forward to sharing some in-depth analysis and exploration of the convenience spectrum in upcoming posts and hope that it leads to an interesting discussion. If anything, this exercise has shown me how much I appreciate the spectrum of these APIs. Perhaps our documentation should be updated to describe each topic area in terms of this spectrum.

Thanks to David Fowler, Jan Kotas, and Stephen Toub for their help contributing to these posts.

You can keep up to date with this series by subscribing to the Convenience of .NET tag feed in your favorite RSS reader or subscribe to the entire blog via email below.

72 comments

Discussion is closed.

  • Maximilien Noal 11

    This is exactly what enables a project I contribute to, which is an emulator.

    The low-level emulation uses keywords such as stackalloc, Span, fixed*, LibraryImport, threads, ManualResetEvent, IntPtr (not often !), and ReaderWriterLockSlim.

    The high level UI uses high-level constructs such as async/await, File.ReadAllText, DataTemplates, Bindings, Reflection, Source Generators, File/Folder pickers etc…

    It would not have been possible without both.

    • Richard LanderMicrosoft employee 7

      The wide range of public and intentionally designed surface area enables a lot of useful scenarios, like what you are describing.

  • Mystery Man 2

    With respect, I think now is not the time to talk about convenience and control when .NET has a last-mile problem.

    Nobody has adopted .NET 5 or later, not even the Microsoft Windows team. The only products written in the modern .NET are PowerShell, PowerToys, and Paint.net. And that’s it. The platform represents landslide innovations, but without adoption, it’ll remain a joke. An innovative joke, to be sure, but a joke nonetheless.

    • Richard LanderMicrosoft employee 18

      Oh! I’ve got some exciting reading for you then. Check out these compelling posts on the success that Microsoft teams have had adopting the latest releases. They were so happy with the results that they wanted to share their experiences with you and others in the community.

      https://devblogs.microsoft.com/dotnet/category/developer-stories/

      Here’s a nice call out from the Windows Store team using .NET 8 RC1, in our recent System.Text.Json post.

      https://devblogs.microsoft.com/dotnet/system-text-json-in-dotnet-8/#size-reduction

      Customers regularly tell us similar things about their success. We also see download numbers for .NET 8 previews are even higher than prior releases.

      I hope this new information changes your mind.

      • Greg Sohl 3

        While I don’t agree with Mystery Man, I am both excited and concerned about the pace of change and what I perceive as challenges with LTS. We struggle to find time to migrate a couple of large .NET Framework applications. Thankfully the LTS for .NET 4.6+ has been VERY long. But there are huge challenges to migrating big apps that use many 3rd party components and the communications platform Microsoft led us to use – WCF.

        • Richard LanderMicrosoft employee 9

          I sympathize with you. Our philosophy has changed a lot since the .NET Framework 4.6+ days.

          Here’s what we do now, in general:

          • Treat version to version migration as a deliverable, with folks paying attention to that.
          • Adopt industry solutions as much as possible (like gRPC).
          • When there is a question on a behavior, bias to giving choice to the developer.

          This approach has been working. For example, the number of Windows Forms users on .NET 6+ (presumably a lot of migrations from .NET Framework) is quite high. That’s a bright spot I didn’t expect. Third party components and dependencies like WCF and Web Forms can make migrations more challenging.

          As you say, .NET Framework has served you well. It remains our position that .NET Framework is a great place for many apps, particularly those with deep dependencies on Windows technologies/ecosystem.

          • Russell Hires 0

            Do you have a post regarding your versioning approach/philosophy? The current rate of change is exhausting, and some large organizations are having trouble keeping up, or understanding that first bullet point (version-to-version migration as a deliverable). That means they end up running things on out-of-support versions, with no intention of making (what can be almost trivial) changes to move to more current versions.

        • Stephen Cleary 7

          Just gonna pop in here and mention CoreWCF: https://github.com/CoreWCF/CoreWCF

        • MJ 3

          .NET 5 -> 6 -> 7 -> now 8 (RC) has been more or less painless here. Sure, there are a few breaking changes, but nothing that requires days or weeks of work to fix, even on really big code bases.

          I think you will gain a lot by spending the time required to move away from the .NET Framework that is bound to Windows; I don’t think you will find much happening there until it’s EOL. The same goes for many third-party libraries; I think you will eventually start to run into EOL issues there too. If you can’t migrate over as is, then start decoupling things, so you can utilize the more modern framework for new features.

      • Paulo Pinto 3

        Unfortunately, not all Microsoft teams have: Visual Studio, Dynamics, SharePoint, and SQL Server CLR integration come to mind.

        We are still mostly doing .NET Framework to this day, because Microsoft partners like Sitecore have their main products stuck on .NET Framework, while moving to other stacks for their new acquisitions, instead of doing rewrites.

        While I am always eager to play around with newer .NET versions in my private projects, at the agency the number of cloud products built on .NET that require custom integration work delivered by the agency has been steadily decreasing, as those cloud vendors keep adopting UNIX-first languages for their cloud products.

        Advocating for .NET in some of those RFPs feels like being in Asterix’s village.

        Also the recent management missteps at Unity have killed a major reason for many studios to adopt .NET on their new projects post 2024.

        • David KeanMicrosoft employee 16

          I can’t speak for the other teams, but I can speak for Visual Studio. While you are correct that the core process (“devenv”) of Visual Studio sits on top of .NET Framework, most of its satellite processes already run on .NET, or are moving to it very soon (.NET 8.0 in fact!). These processes are the things that drive IntelliSense, IntelliCode, Identity Management, Find and Replace, Debugging Diagnostics, etc.

          -Dave
          VS Team

          • Paulo Pinto 1

            What about the plugins ecosystem?

          • Russell Hires 2

            If someone on your team could write up a blog post about this, I would love to get more detail and info on your migration path, challenges you’ve had, and success.

          • Artem Grunin 0

            Come on, how many years did it take to move everything but the main stuff to Core? You must be joking if you think it is a success story. .NET will never be successful until one .NET appears.

        • MJ 1

          I’m personally not seeing this, I see a lot of people using C# for workloads on Linux (containers), esp. for new cloud-native solutions.

          I don’t think the Unity situation has much effect either; their main competitor for 2D/(3D) is probably Godot, which already supports .NET (C#) with the .NET Core runtime (not just the older stuff). There are also things like FNA for C# if Godot isn’t your thing, and Unreal has been C++ for ages, so not much has changed for their main high-end 3D engine competitor.

          I’m not sure what you define as UNIX first, if you mean Rust or Go, they work just as well on Windows, and really aren’t UNIX first, more than .NET Core would be.

      • Freman Bregg 1

        I was drawn to .NET Core 5 because there seemed to be plans to include some kind of compatibility with Java and JVM. Nothing in this direction three versions later. MAUI is a disaster. WPF is still the good old WPF, with all its good and bad points.

        I work for banks and stock exchanges. NOT ONE of my clients is currently using any flavor of .NET, though I do consider .NET infinitely better than all of the Java crap.

        So yes, guys, you have a BIG problem selling .NET. All those linked use cases are internal Microsoft products, by the way.

      • Reelix 1

        Now check out the compelling posts on how the team developing VS has had success in adopting the latest releases!

        … Right?

      • Nikolay Zdravkov 8

        I love .NET and how it’s developing. I want to see Visual Studio running on .NET Core, and I hope that’s in Microsoft’s plans. With no recent announcements, it seems that we will need to wait a long time for that upgrade to happen.

      • Mystery Man 0

        I hope this new information changes your mind.

        Change my mind about what? The fact that Windows 11 ships with .NET 2.0 instead of 7.0? The fact that apps that should be 400 kB in size are now 400 MB? Don’t count on it.

        Please don’t ignore the writing on the wall, lest you share the fate of Internet Explorer. .NET has an adoption problem. The company that made it isn’t using it. And we’re talking about a company whose internal policy was eating one’s own dog food.

        • Richard LanderMicrosoft employee 5

          The fact that Windows 11 ships with .NET 2.0 instead of 7.0?

          It is an intentional choice to NOT include .NET Core in Windows. We’ve been asked to do that many times.

          The fact that apps that should be 400 kB in size are now 400 MB? Don’t count on it.

          This is a strong claim. You’re going to need to show receipts on that. Please share an app on GitHub that has this behavior.

          BTW: I’ve been using native AOT recently. It’s capable of producing container images in the range of 10 MB. It’s quite impressive.

          • Mystery Man 1

            Ha! I was hoping you’d challenge me because my numbers are real-world examples.

            The Windows Registry editor is approximately 400 kB (a 362 kB EXE plus a 40 kB MUI file). PowerToys includes a Registry Preview add-on written with .NET 6 that has grown to 400 MB.

            The situation became so severe that PowerToys version 0.72 resorted to using shared DLLs, reducing the disk footprint from 3.10 GB to 554 MB. However, they are still forced to ship the entire .NET Desktop Runtime with the app. Initially, they did it the old way, i.e., installing a separate, shared .NET runtime (the way we do it with the C++ Runtime), but eventually gave up. The installer was finicky.

          • Richard LanderMicrosoft employee 4

            @Mystery man … Apologies … this thread is now so deeply nested, I cannot directly reply to your last statement.

            Great example and topic. Yes, self-contained apps are a lot bigger, for obvious reasons. However, the 400MB definitely contains a lot more than just .NET binaries.

            To make the comparison, apples to apples, we’d need to see what PowerToys looks like with a framework-dependent install. That would tell the true story, as I expect we’d see PowerToys is still sizable w/o including the .NET runtime. It’s similarly fine to note (and even complain) that PowerToys is large. It’s (in part) a function of .NET not being available in the Windows Store. We’ve considered it, but we haven’t had enough requests to make it happen.

          • Mystery Man 4

            No problem. I understand the limitations of this blog.

            Now, you know my concern, and I know where you stand. Let’s leave it at that. And if anybody asks, I’ll tell them, “Richard Lander has a good grasp of the subject matter and remained professional and courteous the whole time.”

          • Matthew Bonanno 3

            Consider this my request for .NET in the Microsoft Store (Windows Store)! 😊

        • MJ 2

          There is a world outside just Windows, so not shipping .NET Core with Windows has been one of the best decisions: new releases are not tied to specific Windows releases and don’t have to wait for them, but can ship out of band, for all operating systems at the same time, since the main benefit of .NET Core is that it’s cross-platform.

          400MB is not common, even for large code bases on .NET Core, unless you are bundling the whole framework with the executable.

    • Martin Enzelsberger 8

      We made the switch to .NET Core 3 from our 4.7 app that we run on Azure and pretty much upgraded to every new .NET version once it was out and we found time to do so. Currently on .NET 7 and hopefully get to upgrade to 8 this year. (looking forward to some of those System.Text.Json features)

      It’s a boon that .NET is developed so actively and releases come so frequently, rather than every couple of years. Breaking changes are always documented well, so upgrades are not a problem once you’ve cut ties with the old framework and any other legacy libraries.

      • MJ 0

        Yeah, this has been our experience as well: once you get over to the .NET Core side of things, each upgrade from there to newer versions requires minimal work that is properly documented. 5 -> 8 (RC) has been more or less painless here, with great speedups each update.

        Even if you stick with just the LTS releases, you have plenty of time (3 years) to do an honestly pretty lightweight upgrade; it’s not even remotely close to the process of going from .NET 4.6 to .NET Core.

        Personally I really like the release pace we currently have, where you can opt into non-LTS releases to get new features faster.

    • Karsten Krug 5

      Enthusiastic .NET developer since .NET 1.1 here. Although I have adopted (or am still trying to adopt) .NET Core/5+ without any resentment but rather with honest interest, I must admit that to me it falls short of what Microsoft achieved with the “old” .NET Framework back in the day, especially with versions 2.0 and 3.0.

      WinForms was, right out of the box, just an awesome desktop UI framework with a UI designer that is unrivaled to this day. It was easy to adopt for folks familiar with the Win32 API, as it was “just” a layer on top, but also a huge step forward. It made things like MFC look like an anachronism. I’m not surprised in the least that it still has its place today.
      WebForms was equally awesome, a door opener for me personally, having been exclusively a desktop developer before. It was “fat”, yes (cough ViewState cough), but also robust and conceptually easy to grasp. It made coding my first internal web app fun and a breeze.
      WCF made interprocess communication (in my case via HTTP/SOAP) extremely easy, and the tooling is still excellent today. Everything “just works” (props to the folks working on WCF Core, btw). I have one WCF service that has been on v1.0 ever since, never a problem.
      ADO.NET was almost self-explanatory, lightning fast, and it still has its place today in libraries like Dapper, which use it as their foundation.
      WPF took desktop development to a new level by offering its own rendering and a markup language for UI Design in contrast to the code serialization technique used by the WinForms Designer. Steep learning curve for sure, some concepts a bit too abstract, but still productive (boosted by frameworks like Prism) and with huge potential.

      Now please don’t judge me by this seemingly nostalgic trip down memory lane. Although I’m obviously not in my 20s anymore, I’ve never clung to tech from “the good ole days”, which becomes more tempting with age, as we all know. I kept my curiosity, really love everything Blazor, for example, and have been promoting its usage within the company I work for.

      But in my perception, the “old” .NET Framework technologies I mentioned were, even though not the fanciest or shiniest, conceptually very well thought out, rock-solid stable, had excellent tooling, were easy to learn (WPF being the exception), and offered a real and immediately obvious added value over everything that came before.

      .NET Core/5+ seems rushed, with ever-changing APIs and standards. Almost as if MS devs were catering more to themselves than to run-of-the-mill devs like myself, who have to spend most of their time trying to put .NET to good use rather than dealing with .NET-intrinsic conceptual stuff. Just one simple example: I have literally lost count of how many times the application builder logic (you know … all the Program.cs stuff adding and configuring services) has changed between .NET Core/5+ versions. I had just gotten familiar with the Startup class (and its subpar reflection) concept. Next time I look, everything is happening within Program.cs with top-level statements. And many extension methods for setting up DI have also changed again.

      I might be wrong here and my memory blurred, but stuff like this never happened in the “old” .NET Framework that frequently. Its releases were less frequent but more thought through and therefore needed fewer breaking changes. If someone asked me today, “Hey mate, could you make this WinForms app listen and react to HTTP requests on a certain port?”, I’m absolutely certain I could make it happen in a jiffy, because I know things and concepts haven’t been swirled around since the last time I used WCF. If I took a sabbatical year today, I bet I’d be completely lost when trying to get back on board with .NET Core/5+.

      tl;dr:
      I like .NET Core/5+, but I deeply miss the stability, the steadiness, and quite frankly the “Omg, what an absolutely awesome new milestone is this???!!” feeling I got back when Microsoft introduced its key players in the old .NET Framework. Today my reaction is more like “Ok, nice, but what happened to so-and-so? And why did they change xyz again? How do I achieve this thing now?” My enthusiasm has waned.

      • Θοδωρής Τσιρπάνης 2

        I have literally lost count how many times application builder logic […] has been changed between .NET Core/5+ versions

        It has not changed; just new alternatives are provided, and the defaults have changed when you create a new project. In your existing codebase, you are always free to keep using an earlier pattern when you update the .NET version, if you find no benefit in changing it.

        • Karsten Krug 2

          With “changed” I didn’t mean to imply “not compiling” but, like you said, ever-changing defaults and new, redundant alternatives to achieve almost the same thing, which causes lots of confusion, not only among developers working on the same code but also for the single developer who sees his code still compiling yet repeatedly being sidelined by some fancy new pattern.

          Why is that? What is the benefit? Why didn’t this happen, at least to my recollection, in old .NET Framework that frequently? What is the advantage of adding competing ways of achieving the same result with each .NET version? Why should I, as you suggested, stick to an earlier pattern when this leads to my code diverging from defaults, best practices, and the understanding of coworkers? To me it’s just annoying and consumes time I’d rather spend on the actual work I was tasked with.

          • Wojciech Jakubowski 1

            Agree 100%.

            It feels a bit as if someone had way too much time on their hands.

          • Pablo Pioli 4

            Because the work of other people is not the same as your work. There are different needs that weren’t addressed before, like containers, single-file deployment, etc.

          • Jorge Morales Vidal 0

            You can continue working with your current tools, however, expect no change or innovations in .NET Framework. It looks like nostalgia has been keeping you safe in your bubble and that’s ok. New .NET versions are meant for workloads that are not yours. The modern times require faster innovation, and .NET 5, 6, 7 and beyond offer that. If you need a tranquil, peaceful, stable, and calm environment, stay in .NET Framework. But as I said, don’t expect new shiny stuff in there. .NET Framework will remain in Windows for so many years in the future.

          • Karsten Krug 0

            Hi Jorge, my “current tools” are in fact .NET 6/7; 100% of my new projects use this workload, and I actively advocate using it within the company I work for. I recently did a presentation about the benefits of using the .NET CLI and SDK-style project files. I even transitioned to .NET 6 when I needed an internal WinForms app because I was curious about the state of the new out-of-proc designer.

            I’m not in a “bubble”, nor does nostalgia hold me back in any way. I’m 50, but most definitely not in the “everything was better back in the day and now I’m left behind” category.
            My whole intention was to point out that I wish Microsoft would stick to (what I perceived as) the .NET Framework style of stable, steady, and fully thought-out concepts and patterns that devs can rely on to last, not ones frequently superseded by some shiny, fancy rework.
            I was pretty much praising Microsoft’s achievements with .NET Framework and, based on that premise, trying to express some constructive criticism. Please don’t think of me as some grumpy old lazy dev (well, maybe old, I’ll give you that ^^)

    • Thomas Levesque 6

      I don’t know what world you live in, but apparently not the same as me. In the last ~6 years, I almost exclusively worked on projects targeting at least the latest LTS version, in several very different companies. All around me I see people creating new apps with .NET 6 or 7. And most of the legacy projects I see still targeting .NET Framework are in the process of migrating to .NET 6 or later.

    • MJ 2

      That is simply not true, I know several large enterprises (our own included), that are pretty much exclusively on .NET5+(Most are even 6+) for even really big code bases.

  • Patrick Smacchia 9

    I have been working full time with .NET since 2002. I consider myself lucky to have bet on the most supported platform ever, to not have had to change platforms and migrate legacy code every few years, and to not have had to deal with awkward trade-offs.

  • Jeff Jones 1

    Let’s extend this concept to Visual Studio 2022 (.NET’s best friend) and above.

    When creating an item (class, controller, WebAPI, gRPC, etc.), let’s have lots more templates to choose from for more specific uses. And while we are at it, a simpler way to add our own templates. And how about more snippets for speeding up manual creation of Blazor HTML/CSS and MAUI pages (since there is no UI designer in the near future)?

    That would be a productivity multiplier that should be easy for MS to create as extensions for VS 2022 and above. Then integrate that into IntelliSense.

    • Michael Taylor 1

      As for templates, it is pretty easy to create project templates. What we do is create a project configured the way we want and then use Export Template to generate the template. It is automatically added to the appropriate location, so you can begin using it immediately. Since it is a zip file, you can easily edit the files and the project file if changes are needed later.

      For item templates, it is just a matter of dropping the right file into the template directory. Again, the Export Template option can do this.
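
      For the CLI-based route, the same idea is covered by custom `dotnet new` templates: a folder containing the project plus a `.template.config/template.json` file. A minimal sketch (the identity, names, and short name here are illustrative, not a real published template):

      ```json
      {
        "$schema": "http://json.schemastore.org/template",
        "author": "Your Team",
        "classifications": [ "Web", "API" ],
        "identity": "MyCompany.WebApi.Template",
        "name": "MyCompany Web API",
        "shortName": "mycompanyapi",
        "sourceName": "MyCompany.WebApi",
        "tags": { "language": "C#", "type": "project" }
      }
      ```

      Install it with `dotnet new install <path-to-folder>` and scaffold with `dotnet new mycompanyapi -n NewProjectName`; `sourceName` tells the templating engine which identifier to rename throughout files and namespaces to match the `-n` value.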

  • Steve Naidamast 2

    I have been working with .NET since it was commercially released in 2001.

    I have always enjoyed working with the development environments, but I have always believed that moving to .NET Core was a major mistake and, as some people here have commented, it has produced too many issues for many to simply convert over to the new frameworks.

    The original frameworks were a compendium of many major foundations, many of which have simply been removed in the Core frameworks. This has become typical of Microsoft development efforts: throw the baby out with the bathwater. So we no longer have WCF, VB.NET, ASP.NET Web Forms, Workflow Foundation, Silverlight, SQL Server CE (an excellent embedded database engine, replaced with the useless LocalDB), and a host of other development tools that many came to rely on.

    The biggest mistake, and the start of it all, was the deprecation of ASP.NET Web Forms – still, in my view, the zenith of all web development environments. True, it had issues, but as one Silicon Valley engineer demonstrated a number of years ago, in terms of performance, this was an issue with configuration and hardware, not the software.

    Microsoft was making good moves to refine this environment and was doing a credible job. However, the web “purists” got it into their heads that there was only one real way to develop web applications with .NET, and that was to adopt the emerging MVC paradigm. And yet this paradigm had already been around for several years before Microsoft produced its own version in 2010, in the freely available Castle Project “MonoRails” system. In fact, in the early days of Microsoft’s ASP.NET MVC, the project structure was exactly the same as that of MonoRails. Despite this, until ASP.NET MVC, no developer really had any interest in this paradigm.

    And yet Microsoft pushed this paradigm until we have what we have today, a nightmarish quagmire of development incoherence that goes against every standard of quality n-tier application development by breaking everything down into units of complexity that many are having serious issues dealing with.

    Microsoft’s technique of maturing development is simply to jolt the entire community into trauma as they continually take established frameworks and environments and throw them out the door for their new “roadmap” for the future. As if the future has shown us that anything has substantially changed when it comes to developing quality applications.

    Instead of touting all the new capabilities of .NET Core, many of which are actually questionable, Microsoft should start thinking about the longevity of their products without the necessity of traumatizing its most important asset, its Development Community every time they want to release a new technology.

    What Microsoft has done is create a foundation of quicksand that many can no longer easily adapt to, and they are now looking for alternatives. Even Microsoft has admitted that their existing technologies cannot easily be moved to the new Core frameworks. So talk about all the convenience of APIs is rather shortsighted when the daily lives of many developers are caught in the constant crossfire of maintaining existing .NET Framework applications and converting others to the .NET Core frameworks.

    There was nothing wrong with the original frameworks, and this is why so many developers are staying with them; some out of choice, others out of necessity.

    Maybe Microsoft should start re-evaluating these original frameworks to see how they can be maintained, refined, and enhanced, instead of wasting so much time developing their new toys, much of which is merely redundancy for things that were accomplished in the original implementations.

    If they cannot do this because of the way these original frameworks were developed, that is an indication of how poorly thought out the internals of these frameworks were. And there is nothing to suggest, then, that the newer frameworks will have any better planning of their internals for the future, indicating that at some point Microsoft will again bring the roof crashing down on the Development Community…

    Steve Naidamast
    Sr. Software Engineer

    • Richard LanderMicrosoft employee 7

      > I have been working with .NET since it was commercially released in 2001.

      Glad to hear it!

      I sympathize with the sentiment that all the effort is going into the .NET Core family and not .NET Framework. It’s true. We continue to support .NET Framework in real and important ways, while paving a new (cross-platform) path with .NET Core. For most developers (that we talk to), this approach has been welcomed, and has enabled their C# code and skills to go to new places.

      Here’s a great example of a relatively recent change we made to support .NET Framework apps in containers: https://github.com/microsoft/dotnet-framework-docker/discussions/935. Customers told us that this was an important change to enable Web Forms apps (among others) in containers. We did that.

      If you look at all dev stacks, there are generations of libraries and frameworks. We learn, adapt, and make new things. The only difference with .NET is that we support the old technologies for FAR longer.

      > There was nothing wrong with the original frameworks and this is why so many developers are staying with them; some out of choice, other out of necessity.

      Completely agree!

      > If they cannot do this because of the way these original frameworks were developed, this is an indication of how poorly thought out the internals of these frameworks were.

      This is untrue and unfair. We have much better underlying infrastructure in .NET 8 than we had in .NET Framework 4.0, for example. We’re able to take advantage of that in higher-level frameworks, like ASP.NET Core, and you see that in performance numbers. We didn’t have these fancy things 10-15 years ago. Also (and this is the big one), the compatibility burden we have with .NET Framework prevents us from applying those fancy things there. This is why we have side-by-side installation and self-contained deployments with .NET Core. It also would have been impossible (in part because of compatibility) to port .NET Framework to Linux.

      The team has made incredible progress since we shipped .NET Core 1.0. The recent .NET 8 Performance post is a good example of that.

      https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-8/

      • Alexey Gvozdikov 0

        You already broke compatibility from 2.0 to 3.5, and then from 3.5 to 4.0; so what prevented you from “improving” .NET 4.0 to be ideal?! You TWICE broke compatibility! A third time doesn’t matter. :)) But instead you made “.NET Core” – your biggest mistake. You can’t even imagine how pathetic you look from the outside, trying to squeeze quality from the ugly Core. Forget it; concentrate on 4.0 and make it better.

        • Mystery Man 5

          You are not correct about the compatibility between 2.0 and 3.5.

          .NET Framework 3.5 only adds WPF and WCF to .NET Framework 2.0 without adding its own CLR. In other words, a .NET Framework 2.0 app runs on a system that has 3.5 SP1 installed.

    • Arad Alvand 6

      Really wish devblogs comments had a dislike button.

    • MJ 1

      Many of the changes Microsoft has made are because of how the whole field of web development has been changing extremely rapidly; if they had stuck with Web Forms, their whole web framework would essentially be dead for a lot of us.

      Web Forms was the natural choice at the time, after moving over from Classic ASP, but not long after, most of the competing technologies were using MVC, so it made sense that Microsoft moved in that direction, when it aligned with their goal of releasing .NET and ASP.NET as open source, and they had to rewrite things anyway. Since then, SPA applications have exploded, with JavaScript frameworks such as React, Angular, and others, which made way for Web APIs, and for Blazor as a somewhat “comparable” MS offering.

      I don’t think you can speak on behalf of everyone. I for one welcomed MVC with open arms when it was released, but have since moved over to SPA (React) / Web APIs in ASP.NET Core. Things just change rapidly, especially around cloud-native technologies; that is just the name of the game. You have to adapt, or you are left stranded on old unsupported things while everyone around you zips past, working how most people work these days. Personally I like the pace, but I also spend a lot of time keeping up with new changes.

      Even for classic desktop applications, we are seeing a shift away from native components (WinForms, then WPF for more graphically pleasing interfaces) toward just using an embedded web view/renderer for the UI (Teams/Slack/VS Code), especially if it’s cross-platform, so even that field is not stale.

  • Alexey Gvozdikov 2

    I was (and still am) happy with .NET Framework (up to 4.8), but when MS introduced “Core”, it became a MESS of bugs, incompatibilities, and “questionable” decisions. Not to mention that .NET Core (despite wide announcements of “we’re multiplatform”) HAS NO GUI! Pathetic MAUI & Co – just a parody of WPF, still full of bugs and lacking functionality.
    At the moment MS wastes tons of resources on being “multiplatform”, but all these efforts have gone into the sand. You failed at multiplatform! So STOP IT, don’t waste resources anymore; return to .NET FW and improve it – there are many things to do.
    Young boys – among them it is cool to hate MS and be a “Linuxoid” – but it seems MS has also become dumber with all your young blood! People have lost respect for STABILITY and QUALITY. That’s why the newly written “Core” is lacking in all business aspects, and that’s why we still work in .NET FW and don’t even plan to use Core.
    MS must stop and say it openly: “sorry, guys, we f**** up the whole job”. But you still hit the wall with your forehead in pathetic attempts to show how “modern” Core is. IT IS NOT. The buggy, clumsy “Core” must be thrown away along with its young authors. .NET FW – this is your PRODUCT.

    • Kenneth Hoff 0

      While there definitely are some things that .NET Framework does better than .NET Core, most things work just as well if not better – for most people. The cross-platform story, however, I can somewhat agree with – developing for .NET is still only “technically working” on non-Windows platforms (the runtime, however, is fully cross-platform).

    • Arad Alvand 6

      Lmao you don’t seem well-adjusted at all.

    • MJ 0

      They did not fail, except (to a degree) with cross-platform desktop applications, and even then you are not stranded; there are non-MS options.

      These days literally 100% of the C# I write runs on Linux in production (containers, command-line utils, etc.) but is developed on Windows (or Mac, to a lesser degree). That would not have been possible with the old 4.x .NET Framework. Had MS not gone the cross-platform route, I would no longer be a C# dev today, or still a Windows user for that matter, though I still dabble with Rust and Go from time to time.

      I have not had any particular stability or incompatibility issues on the .NET Core side; I had plenty on 4.x .NET Framework.

    • Tore Lønne Senneseth 0

      I think we can agree that the cross-platform story for native apps in .NET has some challenges. However, for backend/web, .NET is probably one of the best platforms out there in terms of both performance and capabilities.
      My experience with .NET as a backend/service technology has been unconditionally positive. We’re currently running a pretty big setup on both Windows and Linux (Docker + Kubernetes) using the exact same codebase, and have absolutely no cross-platform issues.
      When it comes to the claim that newly written “Core” lacks in all business aspects, I don’t understand what you are referring to at all. In my experience, .NET 5+ is way better than .NET Framework in all aspects.

  • Huo Yaoyuan 1

    To be fair, I think what we need most is a shared, MSIX-based distribution of .NET.
    A versionable system component is definitely very valuable. The framework-dependent shared installer works, but it’s far from ideal.

  • Gauthier M. 2

    Having to install the correct version of the .NET Core runtime, or to deliver it with the executable, is a real obstacle to the adoption of .NET. I had to integrate into my update program an embedded copy of the .NET Core runtime (which represents 200 MB!) and an automated downloader/installer for the desired version of the .NET SDK on the client’s computer. It’s very cumbersome.

    I think that when you launch a .NET app, Windows should silently install the correct version of .NET if it is not already installed on the machine. This would relieve developers and users. In addition, it would avoid the nonsense of having to deliver programs with 200 MB of runtime that may already be present for another program on the machine.

    As a user, I can no longer stand having to download programs that have switched to .NET Core and which, overnight, have gone from a few MB to more than 250 MB because they carry the runtime with them.
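
    For what it’s worth, the size trade-off is at least adjustable at publish time; a sketch of the main `dotnet publish` variants (the runtime identifier `win-x64` is just an example):

    ```
    # Framework-dependent (small output; requires the runtime on the machine)
    dotnet publish -c Release

    # Self-contained (carries the runtime; larger output, no install needed)
    dotnet publish -c Release --self-contained true -r win-x64

    # Self-contained, trimmed, single file (strips unused framework code)
    dotnet publish -c Release --self-contained true -r win-x64 \
        -p:PublishSingleFile=true -p:PublishTrimmed=true
    ```

    Trimming can remove code that is only reached via reflection, so it needs testing per app; it does not get a self-contained build down to framework-dependent size, but it shrinks the gap considerably.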

  • Kelly Brown 28

    Many of these comments are… just unreal. I feel obligated to step in and hopefully speak up for many devs in the shadows who are enjoying this golden age of .NET.

    .NET Core was the exact opposite of a “mistake”. It was the best thing to ever happen to the ecosystem. Cross-platform support is critical. The ability to break things to improve them across major versions has allowed .NET to become truly magnificent. The ability to ship self-contained applications is a game-changer. (Yes, the builds are bloated, but progress is being made.) Span is a game changer. Static interfaces and default interface implementations are game changers. Heck even VS Code is a game changer.

    C# is the best language there is. Period. It lets programmers fly high and swoop low. It empowers lib writers to further empower other devs.

    • Redd Tsunekichi 0

      I have real respect that the developers still attempt to engage with some of these comments. Few other development communities get this kind of direct interaction with lead developers of such a massive product (even in such an obscure place like a devblog comment section). I love the language, tooling and support community they have built up and is a reason I will continue to use this product. The innovations with newer dotnet versions have single-handedly saved the ecosystem as a whole.

  • Kirino Kousaka 25

    It seems this thread has many old-timers struggling to adapt to the modern world.

    Saying “Microsoft failed at multiplatform” is a joke. Backend development in the old .NET Framework was practically non-existent. People would get headaches just thinking about the unpredictable IIS or issues with Windows Server. .NET was mainly used on the backend by Microsoft-affiliated teams; it wasn’t the go-to choice for those in their right mind.

    Enter .NET Core. This changed everything. Now, modern .NET is quick, competitive, and stands shoulder to shoulder with Java, Python, Go, and any other Linux-compatible language. When it comes to the language itself, it’s often superior to its competitors, though it might be a bit behind in terms of libraries and the open-source community. With modern trends like OpenTelemetry, .NET is at the forefront.

    Transitioning from the .NET framework to .NET Core was the best decision ever, and it wouldn’t have been possible without a complete rewrite.

    Yes, .NET isn’t without its issues. Take “MAUI”, which shouldn’t even have been introduced. Desktop development isn’t where the money currently is, and motivation is lacking. It’s a pity that projects like Avalonia, which are genuinely beautiful, don’t get the backing they deserve. If Microsoft had funneled the MAUI funds into Avalonia, the “UI problem” in .NET could’ve been solved once and for all. But greed often prevails.

    Nevertheless, .NET Core is a remarkable achievement and doesn’t warrant the criticism it’s receiving from many of you. Each update only makes it better and more mature.

    • Wojciech Jakubowski 8

      I agree – the amount of people whining and reminiscing about the “good old days of .NET Framework” is quite high. It is my major concern that .NET is becoming a technology for “very experienced” devs, and that it is really lacking the ability to attract young talent – university grads, startup founders, etc. – and get people excited. These folks mostly go to Python, JS, Go, even Java. There are many reasons behind that – that is a topic for another discussion.

      But this is especially ironic, given that the .NET and C# teams try to innovate and add features to the language at an absurd pace that is really hard to keep up with for most people. I think it’s probably because they want to keep stable teams for these products, and they need to give these folks some fulfilling work and goals to deliver each year.

      The technical people behind C#/.NET and its teams seem competent, and they deliver.
      But it feels like it’s the corporation, with all its aggressive nature, that is dragging them down. That’s why .NET is where it is and where it will remain – an enterprise-first framework for older folks working for big corps. Mostly on Windows.

      • Mystery Man 2

        Well, both of you have fine points. I think I agree with 95% of everything you both said.

    • W L 2

      The desktop has declined. The web keeps conquering ground: it’s easy, with more potential employees and faster development speed. Those who need high performance will stick to native or Qt. Those who have historical projects to maintain still need WPF and WinForms. Even for new desktop projects, small tools can be developed rapidly using WinForms. For larger projects, or those with requirements for UI aesthetics, the reference material and maturity of WPF are also far better than MAUI’s.

      So in my opinion, MAUI is just a project created by a small number of people for promotion (we all know that mature projects are less likely to achieve impressive results). Abandoning historical baggage is not always a bad thing, but as far as MAUI is concerned, it will not solve any practical problems in the foreseeable future: MAUI’s cross-platform support is totally a joke, especially for Linux users.

    • Taylor Fox 2

      I personally don’t have an interest in building web apps in .NET. I know Express & friends much better than I know ASP, and that’s where the ecosystem & talent currently lie, much more than in ASP.NET Core.

      BUT on the desktop, .NET has gone from being the best way to build quick little apps (with WinForms, or WPF if you wanted something that could actually be a product) to being the same WinForms and the same WPF we were building with 10+ years ago.

      MAUI seems to be mobile-first, and doesn’t support all of the major operating systems.
      Avalonia seems to be slow, and worse than Electron on the efficiency front (especially on macOS).
      WinForms on non-Windows platforms (courtesy of Mono) died many, many years ago.
      Building separate native apps (i.e. WPF/WinForms for Windows, Xamarin.Mac on Mac, and apparently nothing for Linux) is a chore.

      Plus, creating these cross-platform apps is less cross-platform. You need Windows & Visual Studio. If you’re on a Mac, Microsoft’s advice is to get a PC (not that VSfM actually worked when it was supported). If you’re on a Linux box, Microsoft’s advice is to get a Windows box.

      I say all this less out of spite or disdain for the .NET ecosystem, or especially the people that work on it, and more out of frustration: I learnt programming with Visual Basic 2010, and I still know C# better than many other languages. Over those 12 years, it has gone from being a dream – a system that, for the most part, just worked – to a fractured, seemingly directionless nightmare where I spend hours trying to figure out why a fresh project won’t build, and, if it does, why it immediately crashes.

      Note that I’m not trying to say .NET Core is bad, and we should go back to Framework. God no. But I just wanna write my desktop apps.
