What is .NET, and why should you choose it?

.NET Team

.NET has changed a lot since we kicked off the fast-moving .NET open-source and cross-platform project. We’ve re-thought and refined the platform, adding new low-level capabilities designed for performance and safety, paired with higher-level productivity-focused features. Span<T>, hardware intrinsics, and nullable reference types are examples. We’re kicking off a new “.NET Design Point” blog series to explore the fundamentals and design choices that define today’s .NET platform, and how they benefit the code you are writing now.

This first post in the series provides a broad overview of the pillars and the design point of the platform. It describes “what you get” at a foundational level when you choose .NET, and it is intended as a concise, fact-focused framing that you can use to describe the platform to others. Subsequent posts will go into more detail on these same topics; a single overview can’t do any of these features full justice. This post doesn’t describe tools, like Visual Studio, nor does it cover higher-level libraries and application models like those provided by ASP.NET.

Before getting into the details, it is worth talking about .NET usage. .NET is used by millions of developers to create cloud, client, and other apps that run on multiple operating systems and chip architectures. It also runs in some well-known places, like Azure, Stack Overflow, and Unity. It is common to find .NET in companies of all sizes, but particularly larger ones. In many places, it is a good technology to know to get a job.

.NET design point

The .NET platform stands for Productivity, Performance, Security, and Reliability. The balance .NET strikes between these values is what makes it attractive.

The .NET design point can be boiled down to being effective and efficient in both the safe domain (where everything is productive) and in the unsafe domain (where tremendous functionality exists). .NET is perhaps the managed environment with the most built-in functionality, while also offering the lowest cost to interop with the outside world, with no tradeoff between the two. In fact, many features exploit this seamless divide, building safe managed APIs on the raw power and capability of the underlying OS and CPU.

We can expand on the design point a bit more:

  • Productivity is full-stack, with runtime, libraries, language, and tools all contributing to the developer experience.
  • Safe code is the primary compute model, while unsafe code enables additional manual optimizations.
  • Static and dynamic code are both supported, enabling a broad set of distinct scenarios.
  • Native code interop and hardware intrinsics are low cost and high-fidelity (raw API and instruction access).
  • Code is portable across platforms (OS, chip architecture), while platform targeting enables specialization and optimization.
  • Adaptability across programming domains (cloud, client, gaming) is enabled with specialized implementations of the general-purpose programming model.
  • Industry standards like OpenTelemetry and gRPC are favored over bespoke solutions.

The pillars of the .NET Stack

The runtime, libraries, and languages are the pillars of the .NET stack. Higher-level components, like .NET tools and app stacks like ASP.NET Core, build on top of these pillars. The pillars have a symbiotic relationship, having been designed and built together by a single group (Microsoft employees and the open source community), where individuals work on and inform multiple of these components.

C# is object-oriented and the runtime supports object orientation. C# requires garbage collection and the runtime provides a tracing garbage collector. In fact, it would be impossible to port C# (in its complete form) to a system without garbage collection. The libraries (and also the app stacks) shape those capabilities into concepts and object models that enable developers to productively write algorithms in intuitive workflows.

C# is a modern, safe, and general-purpose programming language that spans from high-level features such as data-oriented records to low-level features such as function pointers. It offers static typing and type- and memory-safety as baseline capabilities, which simultaneously improves developer productivity and code safety. The C# compiler is also extensible, supporting a plug-in model that enables developers to augment the system with additional diagnostics and compile-time code generation.

A number of C# features have influenced, or were influenced by, other state-of-the-art programming languages. For example, C# was the first mainstream language to introduce async and await. At the same time, C# borrows concepts first introduced in other programming languages, for example adopting functional approaches such as pattern matching and primary constructors.
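
As a small illustration, here is a sketch of a record declared with a primary constructor, plus a switch expression using pattern matching (the Shape type and its kinds are hypothetical):

using System;

// A record with a primary constructor; the compiler generates the
// constructor, properties, value equality, and ToString.
public record Shape(string Kind, double Size);

public static class Geometry
{
    // A switch expression with property patterns matches on the record's data.
    public static double Area(Shape shape) => shape switch
    {
        { Kind: "circle" } => Math.PI * shape.Size * shape.Size,
        { Kind: "square" } => shape.Size * shape.Size,
        _ => throw new ArgumentException($"Unknown shape kind: {shape.Kind}")
    };
}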

The core libraries expose thousands of types, many of which integrate with and fuel the C# language. For example, C#’s foreach enables enumerating arbitrary collections, with pattern-based optimizations that enable collections like List<T> to be processed simply and efficiently. Resource management may be left up to garbage collection, but prompt cleanup is possible via IDisposable and direct language support in using.
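
A minimal sketch of both features (the data and file name are illustrative):

using System;
using System.Collections.Generic;
using System.IO;

var numbers = new List<int> { 1, 2, 3 };

// foreach works over any enumerable; List<T> exposes a struct enumerator,
// so this loop avoids enumerator allocation.
foreach (int n in numbers)
{
    Console.WriteLine(n);
}

// `using` guarantees Dispose is called, even if an exception is thrown,
// giving prompt cleanup rather than waiting for the GC.
using (StreamWriter writer = new("log.txt"))
{
    writer.WriteLine("done");
}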

String interpolation in C# is both expressive and efficient, integrated with and powered by implementations across core library types like string, StringBuilder, and Span<T>. And language-integrated query (LINQ) features are powered by hundreds of sequence-processing routines in the libraries, like Where, Select, and GroupBy, with an extensible design and implementations that support both in-memory and remote data sources. The list goes on, and what’s integrated into the language directly only scratches the surface of the functionality exposed as part of the core .NET libraries, from compression to cryptography to regular expressions. A comprehensive networking stack is a domain of its own, spanning from sockets to HTTP/3. Similarly, the libraries support processing a myriad of formats and languages like JSON, XML, and tar.
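
A small sketch combining the two (the data is made up):

using System;
using System.Linq;

int[] values = { 5, 1, 8, 3, 9, 4 };

// LINQ operators compose over in-memory sequences ...
int[] evens = values.Where(v => v % 2 == 0).OrderBy(v => v).ToArray();

// ... and string interpolation formats the result.
Console.WriteLine($"Found {evens.Length} even values: {string.Join(", ", evens)}");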

The .NET runtime was initially referred to as the “Common Language Runtime (CLR)”. It continues to support multiple languages, some maintained by Microsoft (e.g. C#, F#, Visual Basic, C++/CLI, and PowerShell) and some by other organizations (e.g. Cobol, Java, PHP, Python, Scheme). Many improvements are language-agnostic, which raises all boats.

Next, we’re going to look at the various platform characteristics that they deliver together. We could detail each of these components separately, but you’ll soon see that they cooperate in delivering on the .NET design point. Let’s start with the type system.

Type system

The .NET type system offers significant breadth, catering somewhat equally to safety, descriptiveness, dynamism, and native interop.

First and foremost, the type system enables an object-oriented programming model. It includes types, (single base class) inheritance, interfaces (including default method implementations), and virtual method dispatch to provide a sensible behavior for all the type layering that object orientation allows.
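
A minimal sketch of these capabilities, using hypothetical types: an interface with a default method implementation, two implementing classes, and virtual dispatch through the interface.

using System;

interface IGreeter
{
    string Name { get; }

    // A default method implementation: implementers get Greet for free.
    string Greet() => $"Hello, {Name}!";
}

class Person : IGreeter
{
    public string Name => "Ada";
}

class LoudPerson : IGreeter
{
    public string Name => "Grace";

    // Replaces the interface default.
    public string Greet() => $"HELLO, {Name.ToUpperInvariant()}!";
}

class Program
{
    static void Main()
    {
        // Virtual dispatch selects the right implementation via the interface.
        foreach (IGreeter greeter in new IGreeter[] { new Person(), new LoudPerson() })
        {
            Console.WriteLine(greeter.Greet());
        }
    }
}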

Generics are a pervasive feature that allows specializing a class to one or more types. For example, List<T> is an open generic class, while instantiations like List<string> and List<int> avoid the need for separate ListOfString and ListOfInt classes, or for relying on object and casting, as was the case with ArrayList. Generics also enable creating useful systems across disparate types (reducing the need for a lot of repeated code), as with Generic Math.
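
For a taste of Generic Math (available since .NET 7), here is a sketch of a single Sum method that works across numeric types; the Sum helper is illustrative:

using System;
using System.Numerics;

class MathDemo
{
    // One implementation works across numeric types via the INumber<T> interface.
    static T Sum<T>(ReadOnlySpan<T> values) where T : INumber<T>
    {
        T total = T.Zero;
        foreach (T value in values)
        {
            total += value;
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Sum<int>(stackalloc int[] { 1, 2, 3 }));        // 6
        Console.WriteLine(Sum<double>(stackalloc double[] { 1.5, 2.5 })); // 4
    }
}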

Delegates and lambdas enable passing methods as data, which makes it easy to integrate external code within a flow of operations owned by another system. They are a kind of “glue code” and their signatures are often generic to allow broad utility.

app.MapGet("/Product/{id}", async (int id) =>
{
    if (await IsProductIdValid(id))
    {
        return await GetProductDetails(id);
    }

    return Products.InvalidProduct;
});

This use of lambdas is part of ASP.NET Core Minimal APIs. It enables providing an endpoint implementation directly to the routing system. In more recent versions, ASP.NET Core makes more extensive use of the type system.

Value types and stack-allocated memory blocks offer more direct, low-level control over data and native platform interop, in contrast to .NET’s GC-managed types. Most of the primitive types in .NET, like integer types, are value types, and users can define their own types with similar semantics.

Value types are fully supported by .NET’s generics system, meaning that generic types like List<T> can provide flat, no-overhead memory representations of value type collections. In addition, .NET generics compile specialized code when value types are substituted, so those generic code paths avoid boxing and the GC overhead it would bring.

byte magicSequence = 0b1000_0001;
// 128 bytes allocated on the stack; no heap allocation, no GC involvement.
Span<byte> data = stackalloc byte[128];
// DuplicateSequence is a user-defined helper (not shown) that fills the slice.
DuplicateSequence(data[0..4], magicSequence);

This code results in stack-allocated values. The Span<byte> is a safe and richer version of what would otherwise be a byte*, providing a length value (with bounds checking) and convenient span slicing.

Ref types and variables are a sort of mini programming model that offers lower-level, lighter-weight abstractions over type system data. This includes Span<T>. This programming model is not general purpose; it carries significant restrictions to maintain safety.

// The field at the heart of Span<T>: a managed pointer to the first element.
internal readonly ref T _reference;

This use of ref results in copying a pointer to the underlying storage rather than copying the data referenced by that pointer. Value types are “copy by value” by default. ref provides a “copy by reference” behavior, which can provide significant performance benefits.

Automatic memory management

The .NET runtime provides automatic memory management via a garbage collector (GC). For any language, its memory management model is likely its most defining characteristic. This is true for .NET languages.

Heap corruption bugs are notoriously hard to debug; it’s not uncommon for engineers to spend weeks, if not months, tracking them down. Many languages use a garbage collector as a user-friendly way of eliminating these bugs, because the GC ensures correct object lifetimes. Typically, GCs free memory in batches in order to operate efficiently. This incurs pauses, which may not be acceptable if you have very tight latency requirements, and memory usage tends to be higher. In exchange, GCs tend to offer better memory locality, and some are able to compact the heap, making it less prone to memory fragmentation.

.NET has a self-tuning, tracing GC. It aims to deliver “hands off” operation in the general case while offering configuration options for more extreme workloads. The GC is the result of many years of investment, improving and learning from many kinds of workloads.

  • Bump pointer allocation — objects are allocated by incrementing an allocation pointer by the size needed (instead of searching segregated free blocks), so objects allocated together tend to stay together. Since they are often accessed together, this provides better memory locality, which is important for performance.
  • Generational collections — object lifetimes overwhelmingly follow the generational hypothesis: an object either lives for a very long time or dies very quickly. It is therefore much more efficient for the GC to collect only the memory occupied by ephemeral objects most of the time it runs (ephemeral GCs), instead of collecting the whole heap (full GCs) on every run.
  • Compaction — the same amount of free space is more useful in fewer, larger chunks than in many smaller ones. During a compacting GC, surviving objects are moved together so that larger free regions can form. This is harder to implement than a non-moving GC because references to moved objects must be updated. The .NET GC is dynamically tuned to compact only when it determines the reclaimed memory is worth the GC cost. This means ephemeral collections are often compacting.
  • Parallel — GC work can run on a single thread or on multiple threads. The Workstation flavor does GC work on a single thread, while the Server flavor does it on multiple GC threads so that it can finish much faster. The Server GC can also accommodate a higher allocation rate, since there are multiple heaps the application can allocate on instead of only one, making it very good for throughput.
  • Concurrent — doing GC work only while user threads are paused (“stop the world”) makes the implementation simpler, but the length of these pauses may be unacceptable. .NET offers a concurrent flavor to mitigate that issue.
  • Pinning — the .NET GC supports object pinning, which enables zero-copy interop with native code. This capability enables high-performance and high-fidelity native interop, with limited overhead for the GC.
  • Standalone GC — a standalone GC with a different implementation can be used (specified via configuration and satisfying the GC interface requirements). This makes investigations and trying out new features much easier.
  • Diagnostics — the GC provides rich information about memory and collections, structured so that you can correlate it with the rest of the system. For example, you can evaluate the GC’s impact on your tail latency by capturing GC events and correlating them with other events, like IO, to calculate how much the GC is contributing versus other factors, so you can direct your efforts to the right components (see the sketch after this list).
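
As a small taste of that introspection, here is a minimal sketch using the in-process APIs (GCSettings and GC.GetGCMemoryInfo) to observe the collector’s configuration and recent activity:

using System;
using System.Runtime;

class GCInfoDemo
{
    static void Main()
    {
        // Which GC flavor and latency mode is this process using?
        Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");
        Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");

        // A snapshot of heap statistics as of the most recent collection.
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        Console.WriteLine($"Heap size: {info.HeapSizeBytes:N0} bytes");
        Console.WriteLine($"Gen0 collections so far: {GC.CollectionCount(0)}");
    }
}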

Safety

Programming safety has been one of the top topics of the last decade. It is an inherent component of a managed environment like .NET.

Forms of safety:

  • Type safety — An arbitrary type cannot be used in place of another, avoiding undefined behavior.
  • Memory safety — Only allocated memory is ever used, for example a variable either references a live object or is null.
  • Concurrency or thread safety — Shared data cannot be accessed in a way that would result in undefined behavior.

Note: The US Federal government recently published guidance on the importance of memory safety.

.NET has been a safe platform since its initial design. In particular, it was intended to enable a new generation of web servers, which inherently need to accept untrusted input in the world’s most hostile computing environment (the Internet). It is now generally accepted that web programs should be written in safe languages.

Type safety is enforced by a combination of the language and the runtime. The compiler validates static invariants, such as assigning unlike types — for example, assigning string to Stream — which will produce compiler errors. The runtime validates dynamic invariants, such as casting between unlike types, which will produce an InvalidCastException.
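
A minimal sketch of both forms of enforcement:

using System;
using System.IO;

object o = "hello";

// Static invariant: the next line would not compile, because object
// cannot be implicitly converted to Stream.
// Stream s = o;

// Dynamic invariant: this compiles, but the runtime rejects the cast.
try
{
    Stream s = (Stream)o;
}
catch (InvalidCastException)
{
    Console.WriteLine("The runtime prevented an unsafe cast.");
}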

Memory safety is provided largely by cooperation between a code generator (like a JIT) and a garbage collector. Variables either reference live objects, are null, or are out of scope. Memory is auto-initialized by default so that new objects do not use uninitialized memory. Bounds checking ensures that accessing an element with an invalid index — often caused by off-by-one errors — will not read undefined memory, but will instead result in an IndexOutOfRangeException.

null handling is a specific form of memory safety. Nullable reference types are a C# language and compiler feature that statically identifies code that does not handle null safely. In particular, the compiler warns you if you dereference a variable that might be null. You can also disallow null assignment, so that the compiler warns you if you assign a variable from a value that might be null. The runtime has a matching dynamic validation feature that prevents null references from being accessed by throwing NullReferenceException.

This feature relies on nullable annotations in the libraries, and on their exhaustive application across the libraries and app stacks, so that static analysis of user code produces accurate results.

string? SomeMethod() => null;
string value = SomeMethod() ?? "default string";

This code is considered null-safe by the C# compiler since null use is declared and handled, in part by ??, the null coalescing operator. The value variable will always be non-null, matching its declaration.

There is no built-in concurrency safety in .NET. Instead, developers need to follow patterns and conventions to avoid undefined behavior. There are also analyzers and other tools in the .NET ecosystem that provide insight into concurrency issues. And the core libraries include a multitude of types and methods that are safe to be used concurrently, for example concurrent collections that support any number of concurrent readers and writers without risking data structure corruption.
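
For example, a sketch using ConcurrentDictionary with many concurrent writers (the counting workload is made up):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var counts = new ConcurrentDictionary<string, int>();

// Many concurrent writers; the dictionary handles all of the synchronization.
Parallel.For(0, 1_000, i =>
{
    string key = (i % 2 == 0) ? "even" : "odd";
    counts.AddOrUpdate(key, 1, (_, current) => current + 1);
});

Console.WriteLine($"even={counts["even"]}, odd={counts["odd"]}");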

The runtime exposes safe and unsafe code models. Safety is guaranteed for safe code, which is the default, while developers must opt-in to using unsafe code. Unsafe code is typically used to interop with the underlying platform, interact with hardware, or to implement manual optimizations for performance critical paths.

A sandbox is a special form of safety that provides isolation and restricts access between components. We rely on standard isolation technologies, like processes (and CGroups), virtual machines, and Wasm (with their varying characteristics).

Error handling

Exceptions are the primary error handling model in .NET. Exceptions have the benefit that error information does not need to be represented in method signatures or handled by every method.

The following code demonstrates a typical pattern:

try
{
    var lines = await File.ReadAllLinesAsync(file);
    Console.WriteLine($"The {file} has {lines.Length} lines.");
}
catch (Exception e) when (e is FileNotFoundException or DirectoryNotFoundException)
{
    Console.WriteLine($"{file} doesn't exist.");
}

Proper exception handling is essential for application reliability. Expected exceptions can be intentionally handled in user code; otherwise, the app will crash. A crashed app is more reliable and diagnosable than an app with undefined behavior.

Exceptions are thrown from the point of an error and automatically capture additional diagnostic information about the state of the program, which is used for interactive debugging, application observability, and post-mortem debugging. Each of these diagnostic approaches relies on access to rich error information and application state to diagnose problems.

Exceptions are intended for rare situations. This is, in part, because they have a relatively high performance cost. They are not intended to be used for control flow, even though they are sometimes used that way.

Exceptions are used (in part) for cancellation. They enable efficiently halting execution and unwinding a callstack that had work in progress once a cancellation request is observed.

try 
{ 
    await source.CopyToAsync(destination, cancellationToken); 
} 
catch (OperationCanceledException) 
{ 
    Console.WriteLine("Operation was canceled"); 
}

.NET design patterns include alternative forms of error handling for situations when the performance cost of exceptions is prohibitive. For example, int.TryParse returns a bool, with an out parameter containing the parsed valid integer upon success. Dictionary<TKey, TValue>.TryGetValue offers a similar model, returning a valid TValue type as an out parameter in the true case.
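
For example (the input string is illustrative):

using System;

string input = "123";

// The Try pattern: a bool result plus an out parameter, no exception on failure.
if (int.TryParse(input, out int number))
{
    Console.WriteLine($"Parsed {number}");
}
else
{
    Console.WriteLine($"'{input}' is not a valid integer.");
}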

Error handling, and diagnostics more generally, is implemented via low-level runtime APIs, higher-level libraries, and tools. These capabilities have been designed to support newer deployment options like containers. For example, dotnet-monitor can egress runtime data from an app to a listener via a built-in diagnostic-oriented web server.

Concurrency

Support for doing multiple things at the same time is fundamental to practically all workloads, whether it be client applications doing background processing while keeping the UI responsive, services handling thousands upon thousands of simultaneous requests, devices responding to a multitude of simultaneous stimuli, or high-powered machines parallelizing the processing of compute-intensive operations. Operating systems provide support for such concurrency via threads, which enable multiple streams of instructions to be processed independently, with the operating system managing the execution of those threads on any available processor cores in the machine. Operating systems also provide support for doing I/O, with mechanisms provided for enabling I/O to be performed in a scalable manner with many I/O operations “in flight” at any particular time. Programming languages and frameworks can then provide various levels of abstraction on top of this core support.

.NET provides such concurrency and parallelization support at multiple levels of abstraction, both via libraries and deeply integrated into C#. A Thread class sits at the bottom of the hierarchy and represents an operating system thread, enabling developers to create new threads and subsequently join with them. ThreadPool sits on top of threads, allowing developers to think in terms of work items that are scheduled asynchronously to run on a pool of threads, with the management of those threads (including the addition and removal of threads from the pool, and the assignment of work items to those threads) left up to the runtime. Task then provides a unified representation for any operations performed asynchronously and that can be created and joined with in multiple ways; for example, Task.Run allows for scheduling a delegate to run on the ThreadPool and returns a Task to represent the eventual completion of that work, while Socket.ReceiveAsync returns a Task<int> (or ValueTask<int>) that represents the eventual completion of the asynchronous I/O to read pending or future data from a Socket.

A vast array of synchronization primitives are provided for coordinating activities synchronously and asynchronously between threads and asynchronous operations, and a multitude of higher-level APIs are provided to ease the implementation of common concurrency patterns, e.g. Parallel.ForEach and Parallel.ForEachAsync make it easier to process all elements of a data sequence in parallel.

Asynchronous programming support is also a first-class feature of the C# programming language, which provides the async and await keywords that make it easy to write and compose asynchronous operations while still enjoying the full benefits of all the control flow constructs the language has to offer.
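
A minimal sketch of that composition, using asynchronous HTTP I/O inside ordinary control flow (the URL is a placeholder):

using System;
using System.Net.Http;

using HttpClient client = new();

// Ordinary control flow (loops, try/catch) composes with asynchronous I/O.
foreach (string url in new[] { "https://example.com" })
{
    try
    {
        string body = await client.GetStringAsync(url);
        Console.WriteLine($"{url}: {body.Length} chars");
    }
    catch (HttpRequestException e)
    {
        Console.WriteLine($"{url} failed: {e.Message}");
    }
}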

Reflection

Reflection is a “programs as data” paradigm, allowing one part of a program to dynamically query and/or invoke another, in terms of assemblies, types and members. It is particularly useful for late-bound programming models and tools.

The following code uses reflection to find and invoke types.

foreach (Type type in typeof(Program).Assembly.DefinedTypes)
{
    if (type.IsAssignableTo(typeof(IStory)) &&
        !type.IsInterface)
    {
        IStory? story = (IStory?)Activator.CreateInstance(type);
        if (story is not null)
        {
            var text = story.TellMeAStory();
            Console.WriteLine(text);
        }
    }
}

interface IStory
{
    string TellMeAStory();
}

class BedTimeStory : IStory
{
    public string TellMeAStory() => "Once upon a time, there was an orphan learning magic ...";
}

class HorrorStory : IStory
{
    public string TellMeAStory() => "On a dark and stormy night, I heard a strange voice in the cellar ...";
}

This code dynamically enumerates all of an assembly’s types that implement a specific interface, instantiates an instance of each type, and invokes a method on the object via that interface. The code could have been written statically instead, since it’s only querying for types in an assembly it’s referencing, but to do so it would need to be handed a collection of all of the instances to process, perhaps as a List<IStory>. This late-bound approach would be more likely to be used if this algorithm loaded arbitrary assemblies from an add-ins directory. Reflection is often used in scenarios like that, when assemblies and types are not known ahead of time.

Reflection is perhaps the most dynamic system offered in .NET. It is intended to enable developers to create their own binary code loaders and method dispatchers, with semantics that can match or diverge from static code policies (defined by the runtime). Reflection exposes a rich object model, which is straightforward to adopt for narrow use cases but requires a deeper understanding of the .NET type system as scenarios get more complex.

Reflection also enables a separate mode where generated IL bytecode can be JIT-compiled at runtime, sometimes to replace a general algorithm with a specialized one. It is often used in serializers or object-relational mappers once the object model and other details are known.
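
As a sketch of that mode, the following uses DynamicMethod to emit a trivial Add method at runtime and have it JIT-compiled (the method is illustrative; real uses emit specialized serialization or mapping code):

using System;
using System.Reflection.Emit;

// Emit IL for an Add(int, int) method at runtime; it is JIT-compiled on first call.
var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
ILGenerator il = add.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Ldarg_1);
il.Emit(OpCodes.Add);
il.Emit(OpCodes.Ret);

var addFunc = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
Console.WriteLine(addFunc(2, 3)); // 5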

Compiled binary format

Apps and libraries are compiled to a standardized cross-platform bytecode in PE/COFF format. Binary distribution is foremost a performance feature. It enables apps to scale to larger and larger numbers of projects. Each library includes a database of imported and exported types, referred to as metadata, which serves a significant role for both development operations and for running the app.

Compiled binaries include two main aspects:

  • Binary bytecode — a terse and regular format that skips the need to parse textual source after compilation by a high-level language compiler (like C#).
  • Metadata — describes imported and exported types, including the location of the byte code for a given method.

For development, tools can efficiently read metadata to determine the set of types exposed by a given library and which of those types implement certain interfaces, for example. This process makes compilation fast and enables IDEs and other tools to accurately present lists of types and members for a given context.

At runtime, metadata enables libraries to be loaded lazily, and method bodies even more so. Reflection (discussed earlier) is the runtime API for metadata and IL. There are other, more appropriate APIs for tools.

The IL format has remained backwards-compatible over time. The latest .NET version can still load and execute binaries produced with .NET Framework 1.0 compilers.

Shared libraries are typically distributed via NuGet packages. NuGet packages, with a single binary, can work on any operating system and architecture, by default, but can also be specialized to provide specific behavior in specific environments.

Code generation

.NET bytecode is not a machine-executable format, so it needs to be made executable by some form of code generator. This can be achieved by ahead-of-time (AOT) compilation, just-in-time (JIT) compilation, interpretation, or transpilation. In fact, all of these are used today in various scenarios.

.NET is most known for JIT compilation. JITs compile methods (and other members) to native code while the application is running and only as they are needed, hence the “just in time” name. For example, a program might only call one of several methods on a type at runtime. A JIT can also take advantage of information that is only available at runtime, like values of initialized readonly static variables or the exact CPU model that the program is running on, and can compile the same method multiple times in order to optimize each time for different goals and with learnings from previous compilations.

JITs produce code for a given operating system and chip architecture. .NET has JIT implementations that support, for example, Arm64 and x64 instruction sets, and Linux, macOS, and Windows operating systems. As a .NET developer, you don’t have to worry about the differences between CPU instruction sets and operating system calling conventions. The JIT takes care of producing the code that the CPU wants. It also knows how to produce fast code for each CPU, and OS and CPU vendors often help us do exactly that.

AOT is similar except that the code is generated before the program is run. Developers choose this option because it can significantly improve startup time by eliminating the work done by a JIT. AOT-built apps are inherently operating system and architecture specific, which means that extra steps are required to make an app run in multiple environments. For example, if you want to support Linux and Windows and Arm64 and x64, then you need to build four variants (to allow for all the combinations). AOT code can provide valuable optimizations, too, but not as many as the JIT in general.

We’ll cover interpretation and transpilation in a later post; they, too, play critical roles in our ecosystem.

Intrinsics are a notable code-generator optimization. Hardware intrinsics are .NET APIs that are translated directly into CPU instructions; they are used pervasively throughout the .NET libraries for SIMD operations.
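
For example, a sketch using the SSE2 hardware intrinsics (this path assumes an x64 CPU; the values are made up):

using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

// On x64 hardware with SSE2, Sse2.Add compiles down to a single PADDD
// instruction that adds four 32-bit integers at once.
if (Sse2.IsSupported)
{
    Vector128<int> left = Vector128.Create(1, 2, 3, 4);
    Vector128<int> right = Vector128.Create(10, 20, 30, 40);
    Console.WriteLine(Sse2.Add(left, right)); // <11, 22, 33, 44>
}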

Interop

.NET has been explicitly designed for low-cost interop with native libraries. .NET programs and libraries can seamlessly call low-level operating system APIs or tap into the vast ecosystem of C/C++ libraries. The modern .NET runtime focuses on providing low-level interop building blocks, such as the ability to call native methods via function pointers, to expose managed methods as unmanaged callbacks, and to customize interface casting. .NET also continues to evolve in this area; .NET 7 introduced source-generated interop solutions that further reduce overhead and are AOT-friendly.

The following demonstrates the efficiency of C# function pointers with the LibraryImport source generator introduced in .NET 7 (this source generator support layers on top of the DllImport support that has existed since the beginning of .NET).

// Using a function pointer avoids a delegate allocation.
// Equivalent to `void (*fptr)(int) = &Callback;` in C
delegate* unmanaged<int, void> fptr = &Callback;
RegisterCallback(fptr);

[UnmanagedCallersOnly]
static void Callback(int a) => Console.WriteLine($"Callback:  {a}");

[LibraryImport("...", EntryPoint = "RegisterCallback")]
static partial void RegisterCallback(delegate* unmanaged<int, void> fptr);

Independent packages provide higher-level domain-specific interop solutions by taking advantage of these low-level building blocks, for example ClangSharp, Xamarin.iOS & Xamarin.Mac, CsWinRT, CsWin32 and DNNE.

These new features don’t mean that the built-in interop solutions, like the runtime’s managed/unmanaged marshalling or Windows COM interop, aren’t useful — we know they are, and people have come to rely upon them. Those features, historically built into the runtime, continue to be supported. However, they are maintained for backward compatibility only, with no plans to evolve them further. All future investment will focus on the interop building blocks and the domain-specific solutions they enable.

Binary distributions

The .NET Team at Microsoft maintains several binary distributions, more recently adding support for Android, iOS, and WebAssembly. The team uses a variety of techniques to specialize the codebase for each of these environments. Most of the platform is written in C#, which allows porting efforts to focus on a relatively small set of components.

The community maintains another set of distributions, largely focused on Linux. For example, .NET is included in Alpine Linux, Fedora, Red Hat Enterprise Linux, and Ubuntu.

The community has also extended .NET to run on other platforms. Samsung ported .NET for their Arm-based Tizen platform. Red Hat and IBM ported .NET to LinuxONE/s390x. Loongson Technology ported .NET to LoongArch. We hope and expect that new partners will port .NET to other environments.

Unity Technologies has started a multi-year initiative to modernize their .NET runtime.

The .NET open source project is maintained and structured to enable individuals, companies, and other organizations to collaborate in a traditional upstream model. Microsoft is the steward of the platform, providing both project governance and project infrastructure (like CI pipelines). The Microsoft team collaborates with organizations to help make them successful in using and/or porting .NET. The project has a broad upstreaming policy, which includes accepting changes that are unique to a given distribution.

A major focus is the source-build project, which multiple organizations use to build .NET according to typical distro rules, for example Canonical (Ubuntu). This focus has expanded more recently with the addition of a Virtual Mono Repo (VMR). The .NET project is composed of many repos, which aids .NET developer efficiency but makes it harder to build the complete product. The VMR solves that problem.

Summary

We’re several versions into the modern .NET era, having recently released .NET 7. We thought it would be useful if we summarized what we’ve been striving to build — at the lowest levels of the platform — since .NET Core 1.0. While we’ve clearly kept to the spirit of the original .NET, the result is a new platform that strikes a new path and offers new and considerably more value to developers.

Let’s end where we started. .NET stands for four values: Productivity, Performance, Security and Reliability. We are big believers that developers are best served when different language platforms offer different approaches. As a team, we seek to offer high productivity to .NET developers while providing a platform that leads in performance, security and reliability.

We plan to add more posts in this series. Which topics would you like to see addressed first? Please tell us in the comments. Would you like more of this “big picture” content?

If you want more of this content, you might check out Introduction to the Common Language Runtime (CLR).

This post was written by Jan Kotas, Rich Lander, Maoni Stephens, and Stephen Toub, with the insight and review of our colleagues on the .NET team.
