.NET has changed a lot since we kicked off the fast-moving .NET open-source and cross-platform project. We’ve re-thought and refined the platform, adding new low-level capabilities designed for performance and safety, paired with higher-level productivity-focused features. Span<T>, hardware intrinsics, and nullable reference types are examples. We’re kicking off a new “.NET Design Point” blog series to explore the fundamentals and design choices that define today’s .NET platform, and how they benefit the code you are writing now.
This first post in the series provides a broad overview of the pillars and design point of the platform. It describes “what you get” at a foundational level when you choose .NET and is intended to be a sufficient, facts-focused framing that you can use to describe the platform to others. Subsequent posts will go into more detail on these same topics, since this post doesn’t quite do any of these features justice. This post doesn’t describe tools, like Visual Studio, nor does it cover higher-level libraries and application models like those provided by ASP.NET.
Before getting into the details, it is worth talking about .NET usage. It is used by millions of developers to create cloud, client, and other apps that run on multiple operating systems and chip architectures. It also runs in some well-known places, like Azure, Stack Overflow, and Unity. It is common to find .NET used in companies of all sizes, but particularly larger ones. In many places, it is a good technology to know to get a job.
.NET design point
The .NET platform stands for Productivity, Performance, Security, and Reliability. The balance .NET strikes between these values is what makes it attractive.
The .NET design point can be boiled down to being effective and efficient in both the safe domain (where everything is productive) and in the unsafe domain (where tremendous functionality exists). .NET is perhaps the managed environment with the most built-in functionality, while also offering the lowest cost to interop with the outside world, with no tradeoff between the two. In fact, many features exploit this seamless divide, building safe managed APIs on the raw power and capability of the underlying OS and CPU.
We can expand on the design point a bit more:
- Productivity is full-stack with runtime, libraries, language, and tools all contributing to developer user experience.
- Safe code is the primary compute model, while unsafe code enables additional manual optimizations.
- Static and dynamic code are both supported, enabling a broad set of distinct scenarios.
- Native code interop and hardware intrinsics are low cost and high-fidelity (raw API and instruction access).
- Code is portable across platforms (OS, chip architecture), while platform targeting enables specialization and optimization.
- Adaptability across programming domains (cloud, client, gaming) is enabled with specialized implementations of the general-purpose programming model.
- Industry standards like OpenTelemetry and gRPC are favored over bespoke solutions.
The pillars of the .NET Stack
The runtime, libraries, and languages are the pillars of the .NET stack. Higher-level components, like .NET tools and app stacks like ASP.NET Core, build on top of these pillars. The pillars have a symbiotic relationship, having been designed and built together by a single group (Microsoft employees and the open source community), where individuals work on and inform multiple of these components.
C# is object-oriented and the runtime supports object orientation. C# requires garbage collection and the runtime provides a tracing garbage collector. In fact, it would be impossible to port C# (in its complete form) to a system without garbage collection. The libraries (and also the app stacks) shape those capabilities into concepts and object models that enable developers to productively write algorithms in intuitive workflows.
C# is a modern, safe, and general-purpose programming language that spans from high-level features such as data-oriented records to low-level features such as function pointers. It offers static typing and type- and memory-safety as baseline capabilities, which simultaneously improves developer productivity and code safety. The C# compiler is also extensible, supporting a plug-in model that enables developers to augment the system with additional diagnostics and compile-time code generation.
A number of C# features have influenced or were influenced by state-of-the-art programming languages. For example, C# was the first mainstream language to introduce async and await. At the same time, C# borrows concepts first introduced in other programming languages, for example by adopting functional approaches such as pattern matching and primary constructors.
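To make that concrete, here is a minimal sketch of our own (the Measurement type and its values are invented for illustration) that combines a record with a primary constructor, pattern matching, and async/await:
using System;
using System.Threading.Tasks;

// A record declared with a primary constructor; value equality and
// ToString are generated by the compiler.
public record Measurement(string Sensor, double Value);

public static class MeasurementExtensions
{
    // Pattern matching over the shape of the data.
    public static string Describe(this Measurement m) => m switch
    {
        { Value: < 0.0 } => $"{m.Sensor}: below zero",
        { Value: > 100.0 } => $"{m.Sensor}: off the scale",
        _ => $"{m.Sensor}: {m.Value}",
    };

    // async/await composes asynchronous work with ordinary control flow.
    public static async Task<string> DescribeLaterAsync(this Measurement m, TimeSpan delay)
    {
        await Task.Delay(delay);
        return m.Describe();
    }
}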
The core libraries expose thousands of types, many of which integrate with and fuel the C# language. For example, C#’s foreach enables enumerating arbitrary collections, with pattern-based optimizations that enable collections like List<T> to be processed simply and efficiently. Resource management may be left up to garbage collection, but prompt cleanup is possible via IDisposable and direct language support in using.
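For example, here is a small sketch of our own (the file name is a throwaway) showing foreach over a List<T> and using guaranteeing prompt cleanup of an IDisposable, even if an exception is thrown:
using System;
using System.Collections.Generic;
using System.IO;

List<string> names = new() { "Ada", "Grace", "Linus" };

// foreach works over arrays, List<T>, iterators, and anything else enumerable.
foreach (string name in names)
{
    Console.WriteLine(name);
}

// using ensures the writer is disposed (flushed and closed) when the block
// exits, whether normally or via an exception.
using (var writer = new StreamWriter("names.txt"))
{
    foreach (string name in names)
    {
        writer.WriteLine(name);
    }
}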
String interpolation in C# is both expressive and efficient, integrated with and powered by implementations across core library types like string, StringBuilder, and Span<T>. And language-integrated query (LINQ) features are powered by hundreds of sequence-processing routines in the libraries, like Where, Select, and GroupBy, with an extensible design and implementations that support both in-memory and remote data sources. The list goes on, and what’s integrated into the language directly only scratches the surface of the functionality exposed as part of the core .NET libraries, from compression to cryptography to regular expressions. A comprehensive networking stack is a domain of its own, spanning from sockets to HTTP/3. Similarly, the libraries support processing a myriad of formats and languages like JSON, XML, and tar.
The .NET runtime was initially referred to as the “Common Language Runtime (CLR)”. It continues to support multiple languages, some maintained by Microsoft (e.g. C#, F#, Visual Basic, C++/CLI, and PowerShell) and some by other organizations (e.g. Cobol, Java, PHP, Python, Scheme). Many improvements are language-agnostic, which raises all boats.
Next, we’re going to look at the various platform characteristics that they deliver together. We could detail each of these components separately, but you’ll soon see that they cooperate in delivering on the .NET design point. Let’s start with the type system.
Type system
The .NET type system offers significant breadth, catering somewhat equally to safety, descriptiveness, dynamism, and native interop.
First and foremost, the type system enables an object-oriented programming model. It includes types, (single base class) inheritance, interfaces (including default method implementations), and virtual method dispatch to provide a sensible behavior for all the type layering that object orientation allows.
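As a rough sketch (the shape types here are invented for illustration), the following shows single base class inheritance, an interface with a default method implementation, and virtual dispatch selecting the most derived override at runtime:
using System;

IShape shape = new ScaledRectangle { Width = 2, Height = 3, Scale = 2 };
Console.WriteLine(shape.Describe()); // virtual dispatch picks ScaledRectangle.Area

interface IShape
{
    double Area();

    // Default interface method: implementers get this behavior for free.
    string Describe() => $"{GetType().Name} with area {Area():F2}";
}

class Rectangle : IShape
{
    public double Width { get; init; }
    public double Height { get; init; }

    // Virtual method: derived classes can override the behavior.
    public virtual double Area() => Width * Height;
}

class ScaledRectangle : Rectangle // single base class inheritance
{
    public double Scale { get; init; } = 1.0;
    public override double Area() => base.Area() * Scale * Scale;
}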
Generics are a pervasive feature that allow specializing classes to one or more types. For example, List<T> is an open generic class, while instantiations like List<string> and List<int> avoid the need for separate ListOfString and ListOfInt classes or relying on object and casting as was the case with ArrayList. Generics also enable creating useful systems across disparate types (and reducing the need for a lot of code), like with Generic Math.
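Generic Math (available since .NET 7) is a good example; the following sketch of our own writes one Sum method that works for any numeric type via the INumber<T> interface:
using System;
using System.Numerics;

// One implementation for every numeric type, thanks to static abstract
// interface members such as INumber<T>.Zero and the + operator.
static T Sum<T>(T[] values) where T : INumber<T>
{
    T total = T.Zero;
    foreach (T value in values)
    {
        total += value;
    }
    return total;
}

Console.WriteLine(Sum(new[] { 1, 2, 3 }));    // 6 (int)
Console.WriteLine(Sum(new[] { 1.5, 2.25 }));  // 3.75 (double)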
Delegates and lambdas enable passing methods as data, which makes it easy to integrate external code within a flow of operations owned by another system. They are a kind of “glue code” and their signatures are often generic to allow broad utility.
app.MapGet("/Product/{id}", async (int id) =>
{
if (await IsProductIdValid(id))
{
return await GetProductDetails(id);
}
return Products.InvalidProduct;
});
This use of lambdas is part of ASP.NET Core Minimal APIs. It enables providing an endpoint implementation directly to the routing system. In more recent versions, ASP.NET Core makes more extensive use of the type system.
Value types and stack-allocated memory blocks offer more direct, low-level control over data and native platform interop, in contrast to .NET’s GC-managed types. Most of the primitive types in .NET, like integer types, are value types, and users can define their own types with similar semantics.
Value types are fully supported through .NET’s generics system, meaning that generic types like List<T> can provide flat, no-overhead memory representations of value type collections. In addition, .NET generics provide specialized compiled code when value types are substituted, meaning that those generic code paths can avoid expensive GC overhead.
byte magicSequence = 0b1000_0001;
Span<byte> data = stackalloc byte[128];
DuplicateSequence(data[0..4], magicSequence);
This code results in stack-allocated values. The Span<byte> is a safe and richer version of what would otherwise be a byte*, providing a length value (with bounds checking) and convenient span slicing.
Ref types and variables are a sort of mini programming model that offers lower-level and lighter-weight abstractions over type system data. This includes Span<T>. This programming model is not general purpose and comes with significant restrictions to maintain safety.
internal readonly ref T _reference;
This use of ref results in copying a pointer to the underlying storage rather than copying the data referenced by that pointer. Value types are “copy by value” by default. ref provides a “copy by reference” behavior, which can provide significant performance benefits.
Automatic memory management
The .NET runtime provides automatic memory management via a garbage collector (GC). For any language, its memory management model is likely its most defining characteristic. This is true for .NET languages.
Heap corruption bugs are notoriously hard to debug; it’s not uncommon for engineers to spend weeks, if not months, tracking them down. Many languages use a garbage collector as a user-friendly way of eliminating these bugs, because the GC ensures correct object lifetimes. Typically, GCs free memory in batches to operate efficiently. This incurs pauses that may not be suitable if you have very tight latency requirements, and memory usage tends to be higher. On the other hand, GCs tend to have better memory locality, and some are capable of compacting the heap, making it less prone to memory fragmentation.
.NET has a self-tuning, tracing GC. It aims to deliver “hands off” operation in the general case while offering configuration options for more extreme workloads. The GC is the result of many years of investment, improving and learning from many kinds of workloads.
Bump pointer allocation — objects are allocated by incrementing an allocation pointer by the size needed (instead of finding space in segregated free blocks), so objects allocated together tend to stay together. Since they are often accessed together, this provides better memory locality, which is important for performance.
Generational collections — object lifetimes commonly follow the generational hypothesis: an object either lives for a very long time or dies very quickly. So it’s much more efficient for the GC to collect only the memory occupied by ephemeral objects most of the time it runs (called ephemeral GCs), instead of collecting the whole heap (called full GCs) every time.
Compaction — the same amount of free space is more useful in larger and fewer chunks than in smaller and more numerous ones. During a compacting GC, surviving objects are moved together so that larger free spaces can be formed. This is harder to implement than a non-moving GC because references to the moved objects need to be updated. The .NET GC is dynamically tuned to perform compaction only when it determines the reclaimed memory is worth the GC cost, which means ephemeral collections are often compacting.
Parallel — GC work can run on a single thread or on multiple threads. The Workstation flavor does GC work on a single thread, while the Server flavor does it on multiple GC threads so that it finishes much faster. The Server GC can also accommodate a higher allocation rate, as there are multiple heaps the application can allocate on instead of only one, which makes it very good for throughput.
Concurrent — pausing user threads while GC work runs (“stop-the-world”) keeps the implementation simpler, but the length of those pauses may be unacceptable. .NET offers a concurrent flavor to mitigate that issue.
Pinning — the .NET GC supports object pinning, which enables zero-copy interop with native code. This capability enables high-performance and high-fidelity native interop, with limited overhead for the GC.
Standalone GC — a standalone GC with a different implementation can be used (specified via config and satisfying interface requirements). This makes investigations and trying out new features much easier.
Diagnostics — the GC provides rich information about memory and collections, structured in a way that allows you to correlate it with the rest of the system. For example, you can evaluate the GC’s impact on your tail latency by capturing GC events and correlating them with other events, like I/O, to calculate how much the GC is contributing versus other factors, so you can direct your efforts at the right components.
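The GC flavor and behavior are typically selected through configuration (project properties or environment variables) rather than code, but the libraries expose APIs for observing much of what’s described above. A small sketch of our own:
using System;
using System.Runtime;

// Which flavor is this process using? (Selected via configuration, not code.)
Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");
Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");

// Collection counts by generation reflect the generational hypothesis:
// gen 0 collections vastly outnumber full (gen 2) collections.
Console.WriteLine($"Gen0: {GC.CollectionCount(0)}, Gen1: {GC.CollectionCount(1)}, Gen2: {GC.CollectionCount(2)}");

// A point-in-time snapshot of heap size, fragmentation, and pause time.
GCMemoryInfo info = GC.GetGCMemoryInfo();
Console.WriteLine($"Heap: {info.HeapSizeBytes} bytes, fragmented: {info.FragmentedBytes} bytes, pauses: {info.PauseTimePercentage}%");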
Safety
Programming safety has been one of the top topics of the last decade. It is an inherent component of a managed environment like .NET.
Forms of safety:
- Type safety — An arbitrary type cannot be used in place of another, avoiding undefined behavior.
- Memory safety — Only allocated memory is ever used; for example, a variable either references a live object or is null.
- Concurrency or thread safety — Shared data cannot be accessed in a way that would result in undefined behavior.
Note: The US Federal government recently published guidance on the importance of memory safety.
.NET was designed as a safe platform from the start. In particular, it was intended to enable a new generation of web servers, which inherently need to accept untrusted input in the world’s most hostile computing environment (the Internet). It is now generally accepted that web programs should be written in safe languages.
Type safety is enforced by a combination of the language and the runtime. The compiler validates static invariants, such as assigning unlike types — for example, assigning string to Stream — which will produce compiler errors. The runtime validates dynamic invariants, such as casting between unlike types, which will produce an InvalidCastException.
Memory safety is provided largely by cooperation between a code generator (like a JIT) and a garbage collector. Variables either reference live objects, are null, or are out of scope. Memory is auto-initialized by default so that new objects do not use uninitialized memory. Bounds checking ensures that accessing an element with an invalid index — often the result of an off-by-one error — will not read undefined memory but instead will result in an IndexOutOfRangeException.
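The following snippet of our own shows both dynamic checks in action — the runtime rejecting an invalid cast and an out-of-bounds array access:
using System;

object boxed = "hello";

try
{
    // Static typing can't rule out this cast, so the runtime validates it.
    _ = (System.IO.Stream)boxed;
}
catch (InvalidCastException)
{
    Console.WriteLine("Runtime rejected an invalid cast.");
}

int[] numbers = { 1, 2, 3 };

try
{
    // Bounds checking turns an off-by-one error into an exception
    // rather than a read of undefined memory.
    Console.WriteLine(numbers[3]);
}
catch (IndexOutOfRangeException)
{
    Console.WriteLine("Runtime rejected an out-of-bounds index.");
}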
null handling is a specific form of memory safety. Nullable reference types is a C# language and compiler feature that statically identifies code that is not safely handling null. In particular, the compiler warns you if you dereference a variable that might be null. You can also disallow null assignment so the compiler warns you if you assign a variable from a value that might be null. The runtime has a matching dynamic validation feature that prevents null references from being accessed, by throwing NullReferenceException.
This feature relies on nullable attributes in the library. It also relies on their exhaustive application within the libraries and app stacks such that user code can be provided with accurate results from static analysis tools.
string? SomeMethod() => null;
string value = SomeMethod() ?? "default string";
This code is considered null-safe by the C# compiler since null use is declared and handled, in part by ??, the null-coalescing operator. The value variable will always be non-null, matching its declaration.
There is no built-in concurrency safety in .NET. Instead, developers need to follow patterns and conventions to avoid undefined behavior. There are also analyzers and other tools in the .NET ecosystem that provide insight into concurrency issues. And the core libraries include a multitude of types and methods that are safe to be used concurrently, for example concurrent collections that support any number of concurrent readers and writers without risking data structure corruption.
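For example, a ConcurrentDictionary can be updated from many threads at once without extra locking — a small sketch of our own:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var counts = new ConcurrentDictionary<string, int>();
string[] words = { "alpha", "beta", "alpha", "gamma", "beta", "alpha" };

// Many threads can add and update entries concurrently without corrupting
// the dictionary's internal state.
Parallel.ForEach(words, word => counts.AddOrUpdate(word, 1, (_, current) => current + 1));

foreach (var pair in counts)
{
    Console.WriteLine($"{pair.Key}: {pair.Value}");
}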
The runtime exposes safe and unsafe code models. Safety is guaranteed for safe code, which is the default, while developers must opt-in to using unsafe code. Unsafe code is typically used to interop with the underlying platform, interact with hardware, or to implement manual optimizations for performance critical paths.
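As an illustration of opting in (a sketch of our own; it requires enabling unsafe blocks in the project via AllowUnsafeBlocks), the fixed statement pins an array so a raw pointer can be used alongside the GC:
using System;

int[] numbers = { 1, 2, 3, 4 };

unsafe
{
    // Pin the array so the GC won't move it while we hold a raw pointer.
    fixed (int* p = numbers)
    {
        for (int i = 0; i < numbers.Length; i++)
        {
            Console.WriteLine(*(p + i));
        }
    }
}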
A sandbox is a special form of safety that provides isolation and restricts access between components. We rely on standard isolation technologies, like processes (and CGroups), virtual machines, and Wasm (with their varying characteristics).
Error handling
Exceptions are the primary error handling model in .NET. Exceptions have the benefit that error information does not need to be represented in method signatures or handled by every method.
The following code demonstrates a typical pattern:
try
{
var lines = await File.ReadAllLinesAsync(file);
Console.WriteLine($"The {file} has {lines.Length} lines.");
}
catch (Exception e) when (e is FileNotFoundException or DirectoryNotFoundException)
{
Console.WriteLine($"{file} doesn't exist.");
}
Proper exception handling is essential for application reliability. Expected exceptions can be intentionally handled in user code; otherwise, an app will crash. A crashed app is more reliable and diagnosable than an app with undefined behavior.
Exceptions are thrown from the point of an error and automatically collect additional diagnostic information about the state of the program, which is used with interactive debugging, application observability, and post-mortem debugging. Each of these diagnostic approaches relies on having access to rich error information and application state to diagnose problems.
Exceptions are intended for rare situations. This is, in part, because they have a relatively high performance cost. They are not intended to be used for control flow, even though they are sometimes used that way.
Exceptions are used (in part) for cancellation. They enable efficiently halting execution and unwinding a callstack that had work in progress once a cancellation request is observed.
try
{
await source.CopyToAsync(destination, cancellationToken);
}
catch (OperationCanceledException)
{
Console.WriteLine("Operation was canceled");
}
.NET design patterns include alternative forms of error handling for situations when the performance cost of exceptions is prohibitive. For example, int.TryParse returns a bool, with an out parameter containing the parsed valid integer upon success. Dictionary<TKey, TValue>.TryGetValue offers a similar model, returning a valid TValue as an out parameter in the true case.
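A brief illustration of the Try* pattern (the values here are made up):
using System;
using System.Collections.Generic;

// Try* APIs report failure via a bool instead of throwing an exception.
if (int.TryParse("123", out int number))
{
    Console.WriteLine($"Parsed {number}");
}

var prices = new Dictionary<string, decimal> { ["apple"] = 1.50m };

if (prices.TryGetValue("banana", out decimal price))
{
    Console.WriteLine($"Bananas cost {price}");
}
else
{
    Console.WriteLine("No bananas on the price list.");
}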
Error handling, and diagnostics more generally, is implemented via low-level runtime APIs, higher-level libraries, and tools. These capabilities have been designed to support newer deployment options like containers. For example, dotnet-monitor can egress runtime data from an app to a listener via a built-in diagnostic-oriented web server.
Concurrency
Support for doing multiple things at the same time is fundamental to practically all workloads, whether it be client applications doing background processing while keeping the UI responsive, services handling thousands upon thousands of simultaneous requests, devices responding to a multitude of simultaneous stimuli, or high-powered machines parallelizing the processing of compute-intensive operations. Operating systems provide support for such concurrency via threads, which enable multiple streams of instructions to be processed independently, with the operating system managing the execution of those threads on any available processor cores in the machine. Operating systems also provide support for doing I/O, with mechanisms provided for enabling I/O to be performed in a scalable manner with many I/O operations “in flight” at any particular time. Programming languages and frameworks can then provide various levels of abstraction on top of this core support.
.NET provides such concurrency and parallelization support at multiple levels of abstraction, both via libraries and deeply integrated into C#. The Thread class sits at the bottom of the hierarchy and represents an operating system thread, enabling developers to create new threads and subsequently join with them. ThreadPool sits on top of threads, allowing developers to think in terms of work items that are scheduled asynchronously to run on a pool of threads, with the management of those threads (including the addition and removal of threads from the pool, and the assignment of work items to those threads) left up to the runtime. Task then provides a unified representation for any operation performed asynchronously, one that can be created and joined with in multiple ways; for example, Task.Run allows for scheduling a delegate to run on the ThreadPool and returns a Task to represent the eventual completion of that work, while Socket.ReceiveAsync returns a Task<int> (or ValueTask<int>) that represents the eventual completion of the asynchronous I/O to read pending or future data from a Socket. A vast array of synchronization primitives is provided for coordinating activities, synchronously and asynchronously, between threads and asynchronous operations, and a multitude of higher-level APIs ease the implementation of common concurrency patterns; for example, Parallel.ForEach and Parallel.ForEachAsync make it easier to process all elements of a data sequence in parallel.
Asynchronous programming support is also a first-class feature of the C# programming language, which provides the async and await keywords that make it easy to write and compose asynchronous operations while still enjoying the full benefits of all the control flow constructs the language has to offer.
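The sketch below (our own, with stand-in work) shows these layers used together: Task.Run for CPU-bound work on the thread pool, Parallel.ForEachAsync for parallel asynchronous processing, and await to compose the results without blocking a thread:
using System;
using System.Linq;
using System.Threading.Tasks;

// CPU-bound work scheduled on the thread pool.
Task<long> cpuWork = Task.Run(() => Enumerable.Range(1, 1_000_000).Sum(i => (long)i));

// Process items in parallel, with an asynchronous body per item.
int[] ids = { 1, 2, 3, 4, 5 };
await Parallel.ForEachAsync(ids, async (id, cancellationToken) =>
{
    await Task.Delay(10, cancellationToken); // stand-in for asynchronous I/O
    Console.WriteLine($"Processed {id}");
});

// await composes the results without blocking the calling thread.
long sum = await cpuWork;
Console.WriteLine($"Sum: {sum}");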
Reflection
Reflection is a “programs as data” paradigm, allowing one part of a program to dynamically query and/or invoke another, in terms of assemblies, types and members. It is particularly useful for late-bound programming models and tools.
The following code uses reflection to find and invoke types.
foreach (Type type in typeof(Program).Assembly.DefinedTypes)
{
if (type.IsAssignableTo(typeof(IStory)) &&
!type.IsInterface)
{
IStory? story = (IStory?)Activator.CreateInstance(type);
if (story is not null)
{
var text = story.TellMeAStory();
Console.WriteLine(text);
}
}
}
interface IStory
{
string TellMeAStory();
}
class BedTimeStory : IStory
{
public string TellMeAStory() => "Once upon a time, there was an orphan learning magic ...";
}
class HorrorStory : IStory
{
public string TellMeAStory() => "On a dark and stormy night, I heard a strange voice in the cellar ...";
}
This code dynamically enumerates all of an assembly’s types that implement a specific interface, instantiates an instance of each type, and invokes a method on the object via that interface. The code could have been written statically instead, since it’s only querying for types in an assembly it’s referencing, but to do so it would need to be handed a collection of all of the instances to process, perhaps as a List<IStory>. This late-bound approach would be more likely to be used if this algorithm loaded arbitrary assemblies from an add-ins directory. Reflection is often used in scenarios like that, when assemblies and types are not known ahead of time.
Reflection is perhaps the most dynamic system offered in .NET. It is intended to enable developers to create their own binary code loaders and method dispatchers, with semantics that can match or diverge from static code policies (defined by the runtime). Reflection exposes a rich object model, which is straightforward to adopt for narrow use cases but requires a deeper understanding of the .NET type system as scenarios get more complex.
Reflection also enables a separate mode where generated IL byte code can be JIT-compiled at runtime, sometimes used to replace a general algorithm with a specialized one. It is often used in serializers or object relational mappers once the object model and other details are known.
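For instance, System.Reflection.Emit can build a small method at runtime and have it JIT-compiled into a delegate. The following is a minimal sketch of our own, not how any particular serializer does it:
using System;
using System.Reflection.Emit;

// Emit IL equivalent to: static int Add(int a, int b) => a + b;
var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
ILGenerator il = add.GetILGenerator();
il.Emit(OpCodes.Ldarg_0); // load the first argument
il.Emit(OpCodes.Ldarg_1); // load the second argument
il.Emit(OpCodes.Add);     // add them
il.Emit(OpCodes.Ret);     // return the result

// JIT-compile the IL and wrap it in a callable delegate.
var adder = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
Console.WriteLine(adder(2, 3)); // 5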
Compiled binary format
Apps and libraries are compiled to a standardized cross-platform bytecode in PE/COFF format. Binary distribution is foremost a performance feature. It enables apps to scale to larger and larger numbers of projects. Each library includes a database of imported and exported types, referred to as metadata, which serves a significant role for both development operations and for running the app.
Compiled binaries include two main aspects:
- Binary bytecode — terse and regular format that skips the need to parse textual source after compilation by a high-level language compiler (like C#).
- Metadata — describes imported and exported types, including the location of the byte code for a given method.
For development, tools can efficiently read metadata to determine the set of types exposed by a given library and which of those types implement certain interfaces, for example. This process makes compilation fast and enables IDEs and other tools to accurately present lists of types and members for a given context.
At runtime, metadata enables libraries to be loaded lazily, and method bodies even more so. Reflection (discussed earlier) is the runtime API for metadata and IL. There are other, more appropriate APIs for tools.
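One of those tool-oriented APIs is System.Reflection.Metadata, which reads metadata directly from a binary without loading it for execution. A sketch of our own, reading type names from the core library on disk (this assumes a conventional deployment where Assembly.Location points at a file):
using System;
using System.IO;
using System.Reflection.Metadata;
using System.Reflection.PortableExecutable;

// Read type names straight from the binary's metadata tables,
// without loading the assembly into the runtime for execution.
using FileStream stream = File.OpenRead(typeof(object).Assembly.Location);
using var peReader = new PEReader(stream);
MetadataReader metadata = peReader.GetMetadataReader();

foreach (TypeDefinitionHandle handle in metadata.TypeDefinitions)
{
    TypeDefinition type = metadata.GetTypeDefinition(handle);
    Console.WriteLine($"{metadata.GetString(type.Namespace)}.{metadata.GetString(type.Name)}");
}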
The IL format has remained backwards-compatible over time. The latest .NET version can still load and execute binaries produced with .NET Framework 1.0 compilers.
Shared libraries are typically distributed via NuGet packages. NuGet packages, with a single binary, can work on any operating system and architecture, by default, but can also be specialized to provide specific behavior in specific environments.
Code generation
.NET bytecode is not a machine-executable format, but it needs to be made executable by some form of code generator. This can be achieved by ahead-of-time (AOT) compilation, just-in-time (JIT) compilation, interpretation, or transpilation. In fact, these are all used today in various scenarios.
.NET is most known for JIT compilation. JITs compile methods (and other members) to native code while the application is running and only as they are needed, hence the “just in time” name. For example, a program might only call one of several methods on a type at runtime. A JIT can also take advantage of information that is only available at runtime, like values of initialized readonly static variables or the exact CPU model that the program is running on, and can compile the same method multiple times in order to optimize each time for different goals and with learnings from previous compilations.
JITs produce code for a given operating system and chip architecture. .NET has JIT implementations that support, for example, Arm64 and x64 instruction sets, and Linux, macOS, and Windows operating systems. As a .NET developer, you don’t have to worry about the differences between CPU instruction sets and operating system calling conventions. The JIT takes care of producing the code that the CPU wants. It also knows how to produce fast code for each CPU, and OS and CPU vendors often help us do exactly that.
AOT is similar except that the code is generated before the program is run. Developers choose this option because it can significantly improve startup time by eliminating the work done by a JIT. AOT-built apps are inherently operating system and architecture specific, which means that extra steps are required to make an app run in multiple environments. For example, if you want to support Linux and Windows and Arm64 and x64, then you need to build four variants (to allow for all the combinations). AOT code can provide valuable optimizations, too, but not as many as the JIT in general.
We’ll cover interpretation and transpilation in a later post; however, they also play critical roles in our ecosystem.
One of the code-generator optimizations is intrinsics. Hardware intrinsics are an example, where .NET APIs are translated directly into CPU instructions. They are used pervasively throughout the .NET libraries for SIMD instructions.
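A small example of our own, assuming an x64 machine with SSE2; the libraries use these same APIs (plus Arm equivalents and cross-platform vector APIs) internally:
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

if (Sse2.IsSupported)
{
    // Each call below maps to a single SSE2 instruction on x64 hardware.
    Vector128<int> a = Vector128.Create(1, 2, 3, 4);
    Vector128<int> b = Vector128.Create(10, 20, 30, 40);
    Vector128<int> sum = Sse2.Add(a, b);
    Console.WriteLine(sum); // <11, 22, 33, 44>
}
else
{
    Console.WriteLine("SSE2 not available; a real library would fall back to a scalar or cross-platform vector path.");
}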
Interop
.NET has been explicitly designed for low-cost interop with native libraries. .NET programs and libraries can seamlessly call low-level operating system APIs or tap into the vast ecosystem of C/C++ libraries. The modern .NET runtime focuses on providing low-level interop building blocks, such as the ability to call native methods via function pointers, expose managed methods as unmanaged callbacks, and customize interface casting. .NET is also continually evolving in this area; .NET 7 released source-generated solutions that further reduce overhead and are AOT friendly.
The following demonstrates the efficiency of C# function pointers with the LibraryImport source generator introduced in .NET 7 (this source-generator support layers on top of the DllImport support that has existed since the beginning of .NET).
// Using a function pointer avoids a delegate allocation.
// Equivalent to `void (*fptr)(int) = &Callback;` in C
delegate* unmanaged<int, void> fptr = &Callback;
RegisterCallback(fptr);
[UnmanagedCallersOnly]
static void Callback(int a) => Console.WriteLine($"Callback: {a}");
[LibraryImport("...", EntryPoint = "RegisterCallback")]
static partial void RegisterCallback(delegate* unmanaged<int, void> fptr);
Independent packages provide higher-level domain-specific interop solutions by taking advantage of these low-level building blocks, for example ClangSharp, Xamarin.iOS & Xamarin.Mac, CsWinRT, CsWin32 and DNNE.
These new features don’t mean that the built-in interop solutions, like runtime managed/unmanaged marshalling or Windows COM interop, aren’t useful — we know they are and that people have come to rely upon them. Those features that have historically been built into the runtime continue to be supported. However, they are for backward compatibility only, with no plans to evolve them further. All future investments will be focused on the interop building blocks and the domain-specific solutions that they enable.
Binary distributions
The .NET Team at Microsoft maintains several binary distributions, more recently adding support for Android, iOS, and WebAssembly. The team uses a variety of techniques to specialize the codebase for each of these environments. Most of the platform is written in C#, which enables porting efforts to be focused on a relatively small set of components.
The community maintains another set of distributions, largely focused on Linux. For example, .NET is included in Alpine Linux, Fedora, Red Hat Enterprise Linux, and Ubuntu.
The community has also extended .NET to run on other platforms. Samsung ported .NET for their Arm-based Tizen platform. Red Hat and IBM ported .NET to LinuxONE/s390x. Loongson Technology ported .NET to LoongArch. We hope and expect that new partners will port .NET to other environments.
Unity Technologies has started a multi-year initiative to modernize their .NET runtime.
The .NET open source project is maintained and structured to enable individuals, companies, and other organizations to collaborate together in a traditional upstream model. Microsoft is the steward of the platform, providing both project governance and project infrastructure (like CI pipelines). The Microsoft team collaborates with organizations to help make them successful using and/or porting .NET. The project has a broad upstreaming policy, which includes accepting changes that are unique to a given distribution.
A major focus is the source-build project, which multiple organizations use to build .NET according to typical distro rules, for example Canonical (Ubuntu). This focus has expanded more recently with the addition of the Virtual Mono Repo (VMR). The .NET project is composed of many repos, which aids .NET developer efficiency but makes it harder to build the complete product. The VMR solves that problem.
Summary
We’re several versions into the modern .NET era, having recently released .NET 7. We thought it would be useful if we summarized what we’ve been striving to build — at the lowest levels of the platform — since .NET Core 1.0. While we’ve clearly kept to the spirit of the original .NET, the result is a new platform that strikes a new path and offers new and considerably more value to developers.
Let’s end where we started. .NET stands for four values: Productivity, Performance, Security and Reliability. We are big believers that developers are best served when different language platforms offer different approaches. As a team, we seek to offer high productivity to .NET developers while providing a platform that leads in performance, security and reliability.
We plan to add more posts in this series. Which topics would you like to see addressed first? Please tell us in the comments. Would you like more of this “big picture” content?
If you want more of this content, you might check out Introduction to the Common Language Runtime (CLR).
This post was written by Jan Kotas, Rich Lander, Maoni Stephens, and Stephen Toub, with the insight and review of our colleagues on the .NET team.