Sunday, July 6, 2008

Microsoft.Net Frequently Asked Questions

Is NGWS Runtime equivalent to the CLR?

Yes. -- Brian Harry, Microsoft.

Has .NET replaced the term NGWS?

Yes. -- Brian Harry, Microsoft.

A very complete list of "old" names (like NGWS) and "new" names (like Visual Studio .NET) is at -- Robert Scoble.

Here's the list of the most common ones (old name, followed by the new name):

- Next version of Windows 2000/NT: Microsoft Windows .NET
- NGWS: .NET (refers to the overall strategy and vision, not to products)
- COM+ 2.0: Microsoft .NET Framework
- Web Services Platform: Microsoft .NET Framework
- URT (Universal Runtime): Microsoft .NET Framework (when referring to the runtime, framework, and ASP+); Common Language Runtime (when referring specifically to the runtime)
- COM+ Runtime: Common Language Runtime, or CLR on second attribution
- Execution engine: Common Language Runtime, or CLR on second attribution
- Microsoft .NET Compact Framework
- Visual Studio 7.0 or VS 7.0: Visual Studio .NET, or VS.NET on second attribution (Waggener Edstrom claims that there is no space between "Visual Studio" and ".NET", but on Microsoft's Web site they appear to use "Visual Studio .NET" with a space, so that's what I'm going with until Microsoft changes its usage)
- Visual Basic 7.0 or VB 7.0: Visual Basic .NET, or VB.NET on second attribution

Since all .NET applications, regardless of language, share the Common Language Runtime (CLR), are there any estimates as to the performance differences across languages?

By the time the CLR gets to execute your code, it is all in IL format so there will be no difference in performance regardless of what language it was originally written in (even COBOL). So really the choice of what language to use comes down to personal preference.

Of course most people will not allow programmers to choose to use whatever language suits their fancy, but as far as performance goes, there should be no difference. -- Ron Jacobs, Program Manager COM+, Microsoft.

Ron Jacobs's answer is correct in theory, but Brian Harry's response (below) is more accurate. In theory, all compilers that emit IL will do so equally efficiently and therefore have the same performance. In practice, this probably won't be the case, as some compilers might emit more optimized IL than others. -- Robin Maffeo, .NET Frameworks performance, Microsoft.

If that's the case, then how will a .NET application's performance compare to a similar application written in ATL or even VB?

Yes, the performance of managed code should be similar across languages. In order to answer Steve's original question, we will need to compare the performance of managed code against unmanaged code (because an ATL COM server runs unmanaged). Microsoft was putting up statistics at the PDC claiming only a 10% performance degradation at this point in managed code vs. unmanaged, with a few tests showing as much as a 40% degradation (and the speaker claimed they'd be working on that). But the claim was also made that the new XML parser written in C# is as fast or faster than the MSXML XML parser. The claim here was you write better code in C# than in C++. I don't think we're going to have definitive answers on this until we have at least a beta version of the platform. -- Eric Hill.

Depends on the application (doesn't it always? ;). It's really not .NET vs. other; it's the more generic runtime (GC/JIT/managed) vs. other, depending on how your application is designed and used. There are just too many factors that vary from application to application to say which is better. The things that will differ the most between a runtime application and any other are:

- load-time
- execution-time
- memory footprint
- etc.

--Drew Marsh

You should see an overall increase in performance over VB6 for most apps. One of the biggest effects is that the GC allocator is WAY better than the BSTR allocator. However (for V1) the VC compiler will produce better performing code than the VB compiler. The VC compiler runs the IL through the same optimizer that it runs x86 code through and applies many of the same optimizations. Some of the machine dependent optimizations can not be performed because the result can't be represented in IL. However many optimizations can be: common subexpression elimination, induction variables, loop unrolling, etc. Btw, somebody else pointed out to me that another huge perf advantage VB 7 has over VB 6 is all of the VARIANT conversions that you do in VB 6 simply go away in VB.NET. -- Brian Harry, Microsoft.

How does threading work in .NET?

There is a System.Threading package providing classes such as Thread, Mutex, ThreadPool, etc. Some languages provide native mechanisms for creating/synchronizing threads, such as C#'s "lock" statement. To create a thread, a Delegate (function pointer in CLR) is passed to the constructor of the Thread class. -- Drew Marsh.
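The pattern Drew describes can be sketched in current C# syntax (the lambda below stands in for the delegate passed to the Thread constructor; the class and method names are invented for illustration):

```csharp
using System;
using System.Threading;

// A Thread is constructed from a delegate (ThreadStart), and C#'s lock
// statement provides language-level synchronization around shared state.
public class CounterDemo
{
    private readonly object _sync = new object();
    private int _count;

    public int RunTwoThreads(int incrementsPerThread)
    {
        ThreadStart work = () =>
        {
            for (int i = 0; i < incrementsPerThread; i++)
            {
                lock (_sync)   // a Monitor underneath
                {
                    _count++;
                }
            }
        };

        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        return _count;
    }
}
```

Without the lock, the two threads would race on `_count` and the final total would be unpredictable.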

Apartment threading is dead, all their new code runs in the MTA, and there is minimal synchronization in the core. There is obviously synchronization in a few places, such as on the output buffers for Console.Write(). But to take another example, their Hashtable is not, by default, synchronized (contrast with early Java). If you want synchronized behavior, you call a method on Hashtable which returns a synchronized wrapper class. Thread safety in .NET is achieved by various higher level mechanisms. I don't know about support classes for explicit multithreaded programming --haven't looked. -- Jeff Berkowitz.

Are there thread classes? Are all of the .NET classes in the CLR thread safe?

The System.* base classes are NOT *all* thread safe. This was done because threading depends on the application's needs, not necessarily on the component's. You can severely cripple an application by assuming it always wants to lock/unlock around a certain call to your component.

Take the collection classes, for example. If .Add were synchronized by the class and I wanted to add 5000 objects to the collection in one batch... I can control that by synchronizing on the collection instance in my application. -- Drew Marsh.
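A sketch of the trade-off Drew describes, using the non-generic Hashtable (`Hashtable.Synchronized` is the wrapper mentioned earlier; the class and method names here are illustrative):

```csharp
using System.Collections;

// The Hashtable itself is not synchronized by default. You can ask for a
// per-call synchronized wrapper, but for a large batch it is cheaper to
// take one lock around the whole loop.
public static class BatchAddDemo
{
    public static int AddBatch(int itemCount)
    {
        var table = new Hashtable();
        Hashtable wrapped = Hashtable.Synchronized(table); // locks on every call

        lock (table.SyncRoot)      // one lock for the whole batch instead
        {
            for (int i = 0; i < itemCount; i++)
                table[i] = i * 2;
        }
        return wrapped.Count;      // the wrapper reflects the same underlying table
    }
}
```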

Are the .NET system packages implemented as COM objects?

No. They're implemented as .NET classes. COM interop is provided by the run-time. If you've ever read anything about how Java/COM worked with attributes on native entities, you already understand it. -- Drew Marsh.

Apparently it's not COM under the hood. "The .NET platform, like the JVM, provides a virtual environment for running programs. There is a byte-code based intermediate language, a built-in garbage collector, a byte-code verifier, classes, methods, interfaces and so on. This environment is called the Common Language Runtime (this name has not been around long enough to be shortened yet). .NET differs from Microsoft's previous efforts in building cross-language component architecture (COM/DCOM). While COM (like CORBA) allows you to invoke methods/functions from one language to another, the common runtime allows data-level interoperability. The difference is that with COM/CORBA you modify objects through their interfaces, so everything is done via function calls (possibly many of them). In contrast, to modify an object in the common runtime you can just change it directly, since each language uses the same data representation, same address space and same garbage collector (when performing remote function calls the situation becomes more similar to DCOM or CORBA)." -- Paul Westcott, quoting from an e-mail he received from Mercury Interactive, which has announced a .NET version of Mercury (read information/dotnet/mercury_and_dotnet.html for more details on Mercury).

Are all (or most) things still IUnknown, at least at the binary-component-level boundary?

No. IUnknown is used for COM interop only. The architecture is similar to the JCW/CCWs from J/Direct in the MSJVM. -- Don Box.

No... there is no COM under the hood. COM interop is possible, but .NET components run right up against the metal. COM's core, v-tables, is replaced by the Virtual Object System (VOS) in the CLR. Like IUnknown, all classes support a base set of operations, because all classes subclass System.Object. -- Drew Marsh.

Are GUIDs still underneath the hierarchical naming system? Or is it a totally different approach that simply supports these for interoperating with the current stuff?

Many of the old-world ideas remain, but most of the implementation details are different. If you grokked J/Direct and Java/COM from the MSJVM, you will feel right at home. -- Don Box.

No... it's namespace/class/struct *names*. Multiple versions of the same class can co-exist effortlessly. -- Drew Marsh.

From the PDC slides I've seen, COM as we know it today (IUnknown, IDispatch, etc.) seems to have been pushed into the CLR. From PDC slide 3-211, "The .NET Framework, A COM Developer's Perspective", there also seems to be no registration. So my question is: if I develop two components, one in VB and one in C#, do I still register them in the Component Services MMC? Does the registry still exist, or how do I know what components are available on my machine? Do all .NET components run with COM+ 1.x? Jeff Weber responded: More accurately, I think you could say COM was replaced by the CLR rather than being "pushed into" the CLR.

Until CreateProcess supports assemblies natively, a certain amount of COM goo lives around the edges of the CLR. As was the case with J/Direct, as long as all references are to CLR-managed objects, no IUnknown is involved. Once a classic COM object is imported, an RCW thunks the IL invocations down to our beloved vtable format. CCWs are used going the other way, but as was the case with J/Direct, no CCW exists UNTIL a classic COM reference is needed, at least as far as I can tell. Also, hosting an AppDomain from unmanaged code is achieved using CoCreateInstance and classic COM interfaces. So, I would say that the CLR is a new object model and runtime that supports COM probably better than any other runtime in use today. -- Don Box.

Correct, no registration. -- Jeff Weber.

Be careful. Shared assemblies wind up being "registered" in the shared assembly cache. Also, if you want classic COM code to be able to call CoCreateInstance, you need to use REGASM.exe to get into HKEY_CLASSES_ROOT. -- Don Box.

All .NET objects are self describing. They contain their own meta-data. -- Jeff Weber.

Right. This was true of classic COM as well. The primary advantage to CLR assemblies is that (a) we have IMPORT info as well as EXPORT info and (b) the fields of a class are known to the runtime. Both of these are super useful. -- Don Box.

I believe we were told, in response to someone's question, that for now the COM+ App Services could still be used with .NET components for transactions and other COM+-type stuff. I have no idea how this works under the hood, and it is my guess that COM+ App Services will be replaced by .NET App Services, probably in the Whistler release of Windows. -- Jeff Weber

There is a tool (REGSVCS.exe) that turns your managed class into a COM+ 1.0 configured component. Most of the classic COM+ 1.0 services are still implemented in the unmanaged IUnknown-based world. Some services (thread affinity and synchronization) also have managed implementations. -- Don Box.

Microsoft's Joe Long Responds:

The real story is that the services that you use in COM+ are exactly the same services you’ll use with .NET and the CLR. This is true for V1 and as far in the future as we can see. That means that the same team that brought you MTS 1, MTS2, and COM+ 1.0 is going to be doing the services for .NET. The services will not be replaced by “.NET App Services” – unless we change the name!

There will be an evolution of the services – in the sense that we want to make using the services as natural as using anything else in the CLR. You should also expect to see new and innovative services (I know that is an overloaded term -- here I use it to mean “component services” e.g. behavior applied to components on your behalf such as pooling, queuing, transactions, etc.)

For the Whistler release of Windows, we have a bunch of new services that we talked about at the PDC. The work we have done to make it better in the CLR includes classes for accessing every single COM+ thing we could think of; it's my expectation that if you can interact with context from VB or C++, you'll be able to do the same using these new classes. You'll also get attributes for all of the COM+ services... so you'll be able to mark a class as Requires Transaction, or a method as AutoDone, etc. We also wrote some code that does automatic registration when you new a "configured" (aka "uses COM+ Services") class that hasn't been registered with COM+... all of the metadata has been compiled in, and we simply do the registration for you based on the tags that you've set.

That being said, we know there are things that we'd like to make better, and we will, such as runtime configuration (e.g. where do you set the username/password that the process runs under?).

Finally, I would be leery of saying "COM was replaced by the CLR"... you have to really define COM when you say something like that. Certainly the programming model in COM (i.e. interfaces, procedure-based programming) and the services in COM aren't going away or being replaced. The infrastructure will evolve into the CLR (i.e. the COM APIs will be superseded), but COM isn't going anywhere anytime soon. There are hundreds of millions (literally!) of COM customers. Apps like Microsoft Office and every other personal productivity app won't run without COM. COM will be around as long as Windows is around (in fact, Windows won't boot without COM). What we are trying to do with the CLR (among other things) is give you language choice, a much better development environment, and a platform that we think will make it easier for you to build your applications. -- Joe Long.

What's the mechanism for accessing Win32 functions from a .NET application?

P/Invoke works similarly to J/Direct from the old MSJVM. Very simple to use, yet fairly flexible and extensible. -- Don Box.
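A minimal P/Invoke declaration, for illustration (the Win32 `GetTickCount` entry point is real, but the wrapper class name is arbitrary, and the call itself only succeeds on Windows):

```csharp
using System.Runtime.InteropServices;

// DllImport marks a static extern method as living in an unmanaged DLL;
// the runtime marshals arguments across the managed/unmanaged boundary.
public static class NativeMethods
{
    [DllImport("kernel32.dll")]          // Win32-only entry point
    public static extern uint GetTickCount();
}
```

Calling `NativeMethods.GetTickCount()` then looks like any other managed method call, which is the "very simple to use" part Don refers to.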

How much of Win32 is abstracted by .NET packages?

Lots. -- Don Box.

I gather that anything you'd want from the OS on a regular basis has been encapsulated by .NET, including thread synchronization, file access, database access, the GDI, windowing... mostly everything. To me this looks like someone committed to .NET should stay away from Win32 and really program to the framework... who knows, next time you look it runs on Solaris.

Still, VC++.NET and even C# should be able to integrate any type of legacy code (including, of course, the Win32 API). There's an "unsafe" declaration that lets you write straight C code in C# and that will most likely allow you to access anything that you've called before. -- Clemens F. Vasters.
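A sketch of what that "unsafe" declaration looks like (the method name is invented; compiling it requires unsafe blocks to be enabled, e.g. the /unsafe compiler switch):

```csharp
// Raw pointer arithmetic inside otherwise managed C#. 'fixed' pins the
// array so the garbage collector cannot move it while we hold a pointer.
public static class UnsafeDemo
{
    public static unsafe int SumWithPointers(int[] values)
    {
        int sum = 0;
        fixed (int* p = values)
        {
            for (int i = 0; i < values.Length; i++)
                sum += *(p + i);   // C-style dereference and pointer math
        }
        return sum;
    }
}
```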

Is there a spec for MSIL on the Web somewhere?

Not yet I don't think. Tons of docs came with the NGWS SDK preview. It's very interesting stuff. I'm keeping my eye on MSDN for updates. -- Drew Marsh.

I hear people saying that C# is very similar to Java. Did anyone do a feature comparison and analyze what is better in C# than in Java?

Off the top of my head, the Java language (or VM) does not have:

- P/Invoke (JNI is clunky in comparison)
- Value types (all "objects" in the JVM are on the GC heap)
- Enumerations (a common complaint)
- Extensible metadata (cannot tag classes/methods in a class file without getting sued)
- An extensible loader (java.lang.ClassLoader is way too ...)
- An intrinsic emitter library for generating types on the fly
- An extensible context model
- An intrinsic XML-based serializer
- 4,000,000 VB developers targeting the same runtime

I am probably missing something. -- Don Box.

C# is sorta like Java, but not really. In C#, properties, callbacks (delegates), and events (multicast delegates) are all first-class citizens. Not to mention that C# has value types, which is the major factor separating the two. -- Drew Marsh.
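The three features Drew names can be sketched together in one small class (the type and member names are invented for illustration):

```csharp
using System;

// A property with logic in its setter, a multicast delegate field declared
// as an event, and delegate-based callbacks wired up by subscribers.
public class Thermostat
{
    public event Action<int> TemperatureChanged;    // multicast delegate

    private int _temperature;
    public int Temperature                          // first-class property
    {
        get { return _temperature; }
        set
        {
            _temperature = value;
            TemperatureChanged?.Invoke(value);      // notify all subscribers
        }
    }
}
```

A caller subscribes with `thermostat.TemperatureChanged += handler;` and every assignment to `Temperature` fires the callbacks.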

A couple of other things Java does not have that C# does:

Preprocessor definitions and conditional compilation (but no macros in either Java or C#, good riddance). Parameters can be passed by reference (both primitive types and object references, it appears), plus you can designate "out" parameters. -- Eric Hill.
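Eric's last point, by-reference and "out" parameters, looks like this in C# (the method is a made-up example):

```csharp
public static class ParamDemo
{
    // 'ref' lets the callee read and modify the caller's variable;
    // 'out' obliges the callee to assign the parameter before returning.
    public static bool TryHalve(ref int value, out int remainder)
    {
        remainder = value % 2;
        value /= 2;
        return remainder == 0;
    }
}
```

After `int v = 7; ParamDemo.TryHalve(ref v, out int r);` the caller's `v` has been halved in place and `r` holds the remainder.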

Does VC++.NET have access to the CLR (Common Language Runtime)? If so, through what mechanism?

Yes, you can compile IL code, but it becomes "unsafe" code. That means it's not managed, and you need a policy to load the unmanaged code, but it works. -- Patrik Torstensson.

Yes, via a technology called Managed Extensions. Essentially they're ANSI-compliant extensions like:

- __value -- creates a new value type
- __box -- enables conversion to/from value types
- __interface -- allows you to create a true interface
- __gc -- marks things as garbage collectable
- __sealed -- marks something as non-extensible
- __property -- creates a .NET property
- __delegate -- creates a .NET delegate
- __try_cast -- a version of dynamic_cast that throws an exception if the cast is illegal
- __transient -- prevents a data member from being serialized
- __serializeable -- marks a class as being serializable by the CLR
- __finally -- declares a finally block
- __abstract -- declares a class that cannot be directly instantiated (i.e. must be subclassed to be used)
- __pin -- prevents a managed object from being moved by the CLR
- __nogc -- declares a native C++ class that is not GC'd

-- Drew Marsh

Are there .NET equivalents to the Win32 messaging routines (e.g. PostMessage, SendMessage, WaitForMultipleObjects, etc.) that can be used for inter-thread communication, or would I have to call a method on an object running on another thread?

There are some threading functions (like WaitForMultipleObjects), but no real IPC. You have the ChannelService that is used for remoting, but I think you could use that for IPC also. You also have storage on the appdomain and could therefore use a shared monitor (mutex-like) and put data into the storage. There are several ways of solving the IPC problem. -- Patrik Torstensson.
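For the WaitForMultipleObjects-style case Patrik mentions, the managed analog is WaitHandle.WaitAll over an array of event handles (a sketch; the class and method names are illustrative):

```csharp
using System;
using System.Threading;

// WaitHandle.WaitAll blocks until every handle in the array is signaled,
// much like WaitForMultipleObjects with bWaitAll = TRUE.
public static class WaitDemo
{
    public static bool RunWorkersAndWaitAll(int workerCount)
    {
        var done = new ManualResetEvent[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            done[i] = new ManualResetEvent(false);
            var handle = done[i];
            new Thread(() => handle.Set()).Start();   // each worker signals its event
        }
        return WaitHandle.WaitAll(done, TimeSpan.FromSeconds(5));
    }
}
```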

How will the future for the Win32 API/Win64 API/Platform SDK pan out? At the moment, all applications and components (and .NET itself) sit above these, so I assume Microsoft will maintain them in versions of Windows well into the future (Blackcomb, and its successors). But does Microsoft plan (a) to stop further development of them and concentrate solely on the .NET Framework, or (b) will Win32/Win64/Platform SDK continue to be .NET's foundation, and will it be seriously extended by Microsoft and independently accessible by third-party apps, and would it still be a sensible target for more demanding, performance-hungry apps? -- Eamon O'Tuathail.

Microsoft is fully committed to continuing to develop and support the Win32/Win64 API set. We will continue to enhance and expose functionality through existing API sets. Under no circumstances would we abandon that incredibly important customer base. We will continue to advance the .NET API set. It will, in all likelihood, be better integrated with our tools because it is a higher-level abstraction and was designed from the ground up to have the necessary facilities to enhance the design-time experience. It is also worth noting that, in virtually every circumstance, the .NET functionality is implemented by using the underlying Win32/Win64 functionality. -- Brian Harry, Microsoft.

If Win32/Win64/Platform SDK are not to be extended, where does that leave the long- term prospects for ATL Server?

Is there a point sometime way into the future (5 years+) when Microsoft will say, "Well, everyone is programming to .NET, and we can change what is underneath it by removing Win32/Win64/Win128" (much like DOS was removed from Windows after most developers moved completely to the Win16/Win32 APIs)? -- Eamon O'Tuathail.

Are Web Services tied to using SOAP? Can you use a home-grown XML-RPC and still use the support provided by MS for Web Services?

At the PDC they demo'd Web Services using SOAP, HTTP GET, and HTTP POST (depending on the complexity of the parameters). The framework remoting also has pluggable protocols (like the Pluggable Channel Architecture). -- Richard Blewett.

With the pending release of Visual Studio 7, ATL Server, Win Forms, etc., should there be any concern about investing time, money, and training in these technologies? Are .NET and C# a complementary technology set, or are there plans to move away from C++/ATL and do everything in C# (i.e. multi-threaded backend server stuff)?

What you learn by understanding COM(+) is completely transferable to understanding .NET. Some concepts go away and some new ones are introduced. IMHO, it's really important to view .NET as the next *evolution* of COM(+). -- Drew Marsh.

Does anyone know how one can place an assembly in the global assembly section? I doubt it's just copying a file to c:\winnt\assembly.

First you have to create a key with the sn.exe utility. Then you run al.exe with the /install and /keyfile switches. I don't have VS7 - I assume it has a GUI method to do so. -- Jon Flanders.

You will find a tool named AL.exe as part of the SDK install. AL has a /I: option that can be used to install assemblies in the Global Assembly Cache. -- Craig Schertz, Microsoft.

If an object is garbage collected between two calls, would it not mean that it has to be stateless, or do you save state somewhere?

If the calls are eons apart, yes the server object will get garbage collected. There are several different options that a developer can adopt in such a case.

(1) Keep the server object alive at the server process by holding on to it. This is typically done for objects which expose a well known service and are expected to stay alive for a long time. Singleton objects are another variation of this.

(2) Ping the server object periodically from the client (note that this is an explicit ping, different from an implicit ping on call)

(3) You can register sponsors who can ping the server on your behalf. Our model supports both stateless and stateful programming. The choice is left to the developer based on the scenario. -- Tarun Anand, Microsoft.

Sponsors actually don't use ping, they work as follows:

The client registers a sponsor with the lease. When the lease expires the lifetime service calls the sponsor's renew method. The sponsor returns with a lease renewal time. If the sponsor doesn't return in a specified amount of time, it is removed from the lease and another sponsor is called. When all the sponsors either fail to respond or don't specify a positive renewal time, the lease expires and the remote object is disconnected.

The "ping" Tarun talks about above is actually a lease renew method call. The client can either implicitly renew the lease by invoking a method on the object, or explicitly renew the lease by specifying a lease renewal time. The lease renewal time is not directly added to the current lease time. The lease time is only increased if the renewal time added to the current clock time is greater than the current lease time. All the initial times for lifetime management can be specified by the user for the appdomain or for a particular object. These times are as follows:

- LeaseManagerPoll timespan
- Initial lease timespan
- Implicit lease renewal timespan
- Sponsor timeout timespan

If there is no lease on an object, then it has an infinite lifetime.

The defaults are:

- Initial lease time: 5 minutes
- Renew-on-call time: 2 minutes
- Sponsorship timeout: 2 minutes
- LeaseManagerPoll time: 10 seconds

These can be changed by use of the .cfg files or by overriding the InitializeLifetimeService method on an object which inherits from MarshalByRefObject. Also, MarshalByRefObject.GetLifetimeService() will return a reference to the object's lease. The reference to the lease can be used to renew it and query its state. -- Peter de Jong, Microsoft.
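Peter's description maps to the classic .NET Framework remoting APIs in System.Runtime.Remoting.Lifetime (these types are not present on modern .NET). A sketch of overriding the lifetime-service method to adjust the lease times listed above (the class name is invented):

```csharp
using System;
using System.Runtime.Remoting.Lifetime;

// Overriding InitializeLifetimeService on a MarshalByRefObject lets a
// remote object choose its own lease times instead of the defaults.
public class LongLivedService : MarshalByRefObject
{
    public override object InitializeLifetimeService()
    {
        ILease lease = (ILease)base.InitializeLifetimeService();
        if (lease.CurrentState == LeaseState.Initial)
        {
            lease.InitialLeaseTime = TimeSpan.FromMinutes(30); // default: 5 minutes
            lease.RenewOnCallTime = TimeSpan.FromMinutes(10);  // default: 2 minutes
        }
        return lease;   // returning null instead would mean an infinite lifetime
    }
}
```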

Will C# have an IDE-based interface like VC++.NET and VB.NET in Visual Studio.NET?

All Visual Studio.NET languages use the same IDE. C# is part of Visual Studio.NET, so you'll use the same IDE as you'll use for Visual C++.NET and Visual Basic.NET, among others. -- Robert Scoble.

Yes, C# will be a full participant in the IDE. -- Brian Harry, Microsoft.

What is the difference between "Managed Code" and "Unmanaged Code" in Visual C++.NET?

I just wrote another posting on managed C++ and I'm going to add some more info here. First, don't take anything I say below wrong. GC is an important part of the CLR and is key to the language integration features and much of the simplified programming and security. That said, the CLR provides many things. Some of them (for example simplified deployment) have nothing whatsoever to do with GC.

Managed C++ allows you to compile existing C++ programs to managed code. We have compiled some very large programs, like our existing C++ backend (and we are working on Microsoft Word now). They run fine as managed code and never allocate a single GC'd object. The CLR provides complete support for both managed and unmanaged memory. A quick sidebar on terminology:

Managed vs unmanaged memory - managed memory is memory that is allocated on the GC heap(s) and will be automatically freed when it is no longer reachable by the program. Unmanaged memory is any memory not on the GC heap(s). Allocation and freeing are performed via other mechanisms and are generally presumed to be explicit.

Managed vs Unmanaged code - managed code is generally a pre-requisite for using managed memory. It is code that has enough meta-data to allow its threads to be stopped at appropriate points, its stacks to be walked, its code identity to be determined, its root references to be identified and its types described. While it is a pre-requisite for managed memory, the relationship does not go the other way. Managed code can take advantage of the type descriptions and other things without using any GC references. Using Managed C++ or other languages like it, you can manage all of your memory yourself or manage most of it and sprinkle in only GC'd objects where you want them. In fact, using C#'s unsafe blocks, you can actually do quite a lot with pointers and custom memory management.

I'd also like to point out that, right now, not all C++ code can be compiled as managed code. A few examples are: inline assembly, and code using setjmp and longjmp and a few other constructs. Most of these are schedule issues and there is no fundamental barrier. The managed C++ compiler handles this quite well by compiling certain routines in your program to unmanaged code and setting up the necessary transitions for you automatically.

Again don't get me wrong - to avoid GC'd objects is to avoid much of the value of the CLR. However, nothing in the CLR forces you to use it. In practice as we move forward, I expect most memory will be managed by the GC and we will use the CLR's unmanaged memory capabilities for the relatively rare occasions where unmanaged memory is more appropriate. -- Brian Harry, Microsoft.

I must say I still find this confusing. -- Andy McMullan.

Andy wrote: My understanding (particularly from posts by Brian Harry) is as follows:

1. Managed code (e.g. managed C++) is not forced to use managed memory - use of GC can be optional (and is optional in C++). Deterministic destruction in C++ can therefore be preserved in managed C++ (see the long thread started by Chris Sells on this subject).

[Ronald Laeremans of Microsoft answers]: These unmanaged classes are not seen as classes in the CLR sense. That is, to other CLR languages they don't look like classes. Which is logical, since these languages do not understand the semantics associated with these classes at all, among other things the lifetime rules.

2. Managed C++ is allowed to use almost all the nasty, dangerous C++-isms that we've grown to love, like pointers. There are only a few fringe things (like in-line assembly) that managed C++ cannot cope with.

[Ronald Laeremans of Microsoft answers]: Managed Extensions to C++ do allow you to use pointers. They allow all the pointer use you know from C++ on unmanaged data, and they allow a superset of the functionality exposed by C# and "unsafe" C# features on managed pointers, i.e. pointers to managed data. You can use inline assembly, varargs functions and setjmp in a program compiled with the /com+ (managed code) switch, but the functions containing these constructs will be automagically compiled as unmanaged (i.e. native x86) code by the compiler. The only things that I am aware of that cannot be compiled at all are unprototyped functions and types with conflicting definitions in different translation units.

If this is true, it seems like I can get many (most?) of the benefits of .NET, including (crucially) access to the CLR class library, by writing managed C++ that doesn't use the garbage collector.

[Ronald Laeremans of Microsoft, answers]: On the interface boundary with the frameworks (class library) you will have to use managed types. All the class library works with managed data only.

So, an answer to your question of "If every language is equal in the eyes of .NET, then why go for the inelegance of managed C++ when you can have C#?" might be "because I want to get access to the CLR, but I don't want to give up features like deterministic destruction, templates, STL, etc." Is this reasonable?

[Ronald Laeremans of Microsoft, answers]: Yes, this is certainly reasonable. And I (personally) expect a reasonably sized developer contingent that will want to keep using unmanaged data for a large part of their applications targeting the .Net framework. Apart from protection of investment and porting, this scenario was a main driving force for us to do Managed Extensions to C++. If you have access to the PDC bits, I would urge you to see how close we got to the goal of making this a compelling and productive language.

Don't get me wrong, I'm not a C++ zealot - far from it, I think the language has many problems. But I'd really like to get to the bottom of whether staying with C++ in the .NET world is a realistic proposition or not. -- Andy McMullan.

[Ronald Laeremans of Microsoft answers]: As Don Box said, it is the community at large that will decide this. We have worked and are working very hard to make it a compelling proposition for code that uses unmanaged data for a substantial part, or code that heavily interacts with unmanaged code (yours, third party, or Win32 APIs), where ME to C++ is by far the easier language to use. And to interact with existing libraries that expose only a C++ API, it is the only realistic choice. Lastly, there is the fact that for extremely performance-sensitive applications, the ability to transparently compile some code as unmanaged could eke out the performance boost you need.

In short, I think that the fact that the CLS supports multiple languages is going to be an incredible strength. I personally would never consider a single language an adequate toolset to address all the challenges a development team is faced with. If it were, software development would probably be the first engineering discipline (or craft, I am still trying to decide which it is closest to) that could get by with a single one-tool-fits-all solution. Again personally, with my background and my preferences, I would not write 100% of my code with ME, nor would I write 100% of it with C#.

I wanted to make sure I got all my acronyms straight:

VOS: Virtual Object System
VES: Virtual Execution System
CLS: Common Language Specification
CLR: Common Language Runtime
CLI: Common Language Infrastructure

The VES is an implementation of VOS. The CLR is an implementation of the CLS. The CLS is a subset of the VOS, thus making the CLR a subset of the VES. The CLI = CLS (I am not sure about this one).

VOS is a dead term; it is now the Common Type System (CTS). I don't think VES is still used, but I'm not 100% sure. The rest are right. I would say that the CLR implements the CTS. The CLS is a set of unenforced rules: a specification that compiler vendors must adhere to in order to ensure that types created with their language can interoperate with other compilers. I'm not exactly sure what MS intends for CLI. I have heard this term but it is very infrequently used. -- Jeffrey Richter.

The spec for the CLS (Common Language Specification) is found as part of "Technical Overview of the NGWS Runtime". The rules are spelled out clearly there... also note that the C# compiler enforces these rules when you use the [assembly:CLSCompliant(true)] custom attribute. VB will check these rules as well in a future release. -- Brad Abrams, Microsoft.

Shockingly close. I'm impressed you got so much of it straight. The CLR is a superset of the VES (it includes additional stuff like debugging and profiling services, COM interop, etc). The CLI is the subset of the CLR and class libraries necessary to support C#. -- Brian Harry, Microsoft.

Is it true that .net remoting cannot be used in systems that involve firewalls?

If there is a firewall involved then calling from the server to the client is not the appropriate solution. .NET Remoting dynamically builds and tears down the connection. -- Jonathan Hawkins, Microsoft.

That is not true. .net remoting can use SOAP over http and will go through firewalls just fine. -- Brian Harry, Microsoft.

Brian, The issue came up with the server calling back a client via a client object that got passed to it. In that case, our understanding is that the ObjRef of the client object requires that the client be an HTTP server, which isn't going to work through typical firewalls. Really, the only way to make things work over the typical firewalls is if the client initiates all actions, including what may be disguised as a server calling the client. -- Kenneth Kasajian.

Yes, in that case, you are right. Call backs like that usually won't work through a firewall, but it works for basic call/return just fine. I don't know of anything that will give you callbacks through a default firewall configuration - it's not really a limitation of the .net remoting. You would have to do polling. -- Brian Harry, Microsoft.

If I call an API function which returns to me a function pointer can I call through that function pointer somehow? In C, I simply dereference the pointer and call through it. In VB, you can't do this out of the box. Anyone know if there's a way to do this using P/Invoke stuff?

Function pointers can be marshaled through PInvoke as delegates. A delegate can be used from any of the languages (including VB). So it is very easy to pass a function pointer to or retrieve one from a Win32 API and call it. -- Brian Harry, Microsoft.

Will a standard EXE compiled in Visual Basic.NET run on Windows 95, 98 or ME? On Windows NT 4.0? Or will it only run on Windows 2000?

When we ship, we will provide a redist that will work on many downstream OSes (not just Windows 2000). Any exe built with VB .NET will work on any machine with this redist installed. -- Brad Abrams, Microsoft.

We will be supporting all of the platforms you listed. -- Brian Harry, Microsoft.

Thanks. Will this also be true of ActiveX DLLs and EXEs ... that is, will COM components written in VB 6 and VB.NET be able to create and call one another through the redist layer?

Yes. -- Brian Harry, Microsoft.

If COM/COM+ is not carried forward into .Net technologies, will Don Box have to change his license-plate from "IUnknown" to something else?

Where did you get that COM and COM+ are not being carried forward into .Net technologies?


I own COM and COM+. Brian Harry owns the CLR. Both of us have been saying as loud as we possibly can that this is absolutely 100% *wrong*.

I know the message was somewhat muddled at the PDC (if it wasn't we wouldn't have to be saying it over and over again!)

The problem is the definition of COM. Here is a summary of how COM/COM+ are affected by the .Net Framework.

COM and COM+ are many different things. It is a low-level, API-based infrastructure that consists of APIs like CoSetProxyBlanket, AddRef, Release, QueryInterface, etc. It is a programming model that revolves around creating objects and calling methods on those objects (procedure oriented). And it is a set of services that sit on top of both of these things that apply and provide behavior to the objects, such as synchronization, pooling, transactions, etc. Over time, the .Net framework will replace the underlying goo. Everybody agrees that this is good.

However, the programming model is here to stay. The way it is implemented might change -- for example, the procedural programming model might be built on top of a messaging layer, which programmers also might have access to. I believe the installed base of applications that depend on this type of programming (e.g. VBScript for Office, ASP pages, etc.) and the familiarity that programmers have with it means that this is here to stay. The services are definitely here to stay. The requirements for them are not going to go away -- anybody who writes a business app will likely need to use these services when they use the .NET framework. Our goal is to make accessing the services from the .Net framework as seamless and as easy as possible. We've made great strides for this in V1 of the .Net framework - there are custom attributes for virtually all of the services, we do automatic registration when unregistered components are created, etc.

We plan to make this even better in subsequent releases of the products. We recognize that some of the benefits of the .Net Framework (like XCOPY install/uninstall, automatic memory management, etc) are currently reduced when using some of the COM+ services inside the .Net Framework. However this is a function of schedule and not intent. We will be working diligently to integrate these services with the .Net infrastructure and create a seamless experience for our customers to use the whole of the .Net platform. It is important to understand that we are not throwing out the old and bringing in the new. We are refocusing our platform to empower the next generation of the web. In doing so we are leveraging all of the assets we currently have and the existing COM+ features are key to this platform.

The DCOM protocol is an interesting issue. Clearly our story for the Internet is HTTP and SOAP. Some intranet scenarios will use this too because there is increasing use of firewalls to partition corporate intranets. That said, DCOM is a high performance binary remoting protocol and is valuable in some tightly coupled scenarios. In fact the common language runtime object remoting services use DCOM for some remoting. We will have to see how this evolves in the future, but there will likely always be a need for a higher-performance protocol than HTTP and XML.

If you have any more questions about this, I'm happy to answer them. -- Joe Long, Microsoft.

Source Code (VB or VC or C#) -- compile --> MIL -- JIT compile (at runtime) --> native code --> run Is this correct? Also what does MIL stand for? And though I have a foggy idea, it would be wonderful if someone could explain the significance of MIL, CLS, CTS and CLR (I found earlier postings, but still can't seem to get the complete picture)

source code (VB, VC, C#, COBOL, Perl, ...) -> MSIL (Microsoft Intermediate Language) -> compiler (might run at runtime (JIT) or might be static, sometimes referred to as pre-JIT) -> native code -> run.

CLS - Common Language Specification: A set of constructs and constraints that serve as a guideline for library writers and compiler writers that allow libraries to be fully usable from any language supporting the CLS and for those languages to integrate with each other.

CTS - Common Type System: The type system supported by the CLR. It is a superset of the CLS and is designed to support additional language features that are not appropriate for language interop.

CLR - Common Language Runtime: An implementation of the CTS and additional services that allows you to execute .Net Framework applications. -- Brian Harry, Microsoft.

Not sure where the M came from on MIL, but the IL is standard Intermediate Language. It goes like this.

The source is written in C#, VB, COBOL, etc. It is then compiled, on command or when first executed, to standard intermediate language. The runtime framework then creates the final binary. This final compiled code is used for all requests until the source is changed, at which time the cached version of the code is invalidated and the process starts over. -- Mike Fleckenstein.

MIL = (Microsoft) Intermediate Language, a CPU independent instruction set.

IL contains instructions for load/storing/initializing and calling methods on objects etc.

So, all languages compile into IL and use the CTS (Common Type System), and if they need to make the code cross-language they follow the CLS subset of the CTS (that makes it possible to access the libs from every language).

So, every language makes IL and uses the same type system. This makes it possible to have inheritance between different languages, because from the runtime point of view they are the same language and of course use the same types (CTS).

This makes it possible to have code written in C++ and use it from C#, VB or your language of choice.

CLR is the common language runtime, the mother to all good. (or evil)


So CTS resides in the CLR and CLS is a sub-part of the CTS.

The CLR (Execution Engine) handles (+more):

· Code management
· Software memory isolation
· Verification of the type safety of IL
· Conversion of IL to native code
· Loading and execution of managed code (IL or native)
· Accessing metadata (enhanced type information)
· Managing memory for managed objects
· Insertion and execution of security checks
· Handling exceptions, including cross-language exceptions
· Interoperation between NGWS objects and COM objects
· Automation of object layout for late binding
· Supporting developer services (profiling, debugging, etc.)

-- Patrik Torstensson.

Do you know which STL/version is going to be shipped with VC7?

A completely up to date version licensed from Dinkumware. If you have the PDC bits, you can look at most of the core. However we are still hard at work at dramatically improving the documentation. The source also has significantly improved in readability. We are confident that for the STL library we will be shipping one of the more conformant implementations in the industry. -- Ronald Laeremans, Microsoft.

Does VC.NET support ANSI/ISO C++? Can VC.NET use Blitz++? http://

VC.Net will not, I am sad to report. We are addressing compliance issues with both the compiler and with the standard library in VC.Net; some of the enhancements are in the bits distributed at the PDC, some we are still working on. However we were not able to do all the work needed to compile Blitz or its competitors that are based on expression templates in this release. E.g. we did not have sufficient time/resources to implement partial specialization. We did significant work on the standard library and on improved diagnostics for template errors, one of the main causes of frustration and hair pulling for many C++ developers trying to use templates. We are looking very hard at what it takes to fully support expression templates and other advanced template-based libraries for the next release. -- Ronald Laeremans, Microsoft.

Anyone know how many ASP programmers there are?

We have some internal numbers that point to upward of 1 million. -- John Montgomery, Microsoft.

I'm getting ready to build a Windows 2000 Server with SQL Server 2000 and Visual Studio.NET on it. Does anyone know of any particular order I should install the db and VS.NET after I get W2kserv up and going? I also have the Windows Update Internet Pak, Host Integration 2000, BizTalk 2000 Server, Application Center, Internet Security & Acceleration Server 2000. Aside from the Windows Update Internet Pak do I really need any of the other pieces of software to build a test web site on this box?

What I've done:

- W2K Server
- CdRom of component update (from PDC CDs), this will install the SDK
- Visual Studio .NET (from PDC CDs)
- Updated the samples and the SDK as described in the .NET SDK CD in the Updated folder (from PDC CDs)

I did not install any "Internet Update" pack or anything else and (on an AMD 333 MHz machine with 128 MB of RAM) it works. Slowly, but it works (I suspect that you need at least 256 MB of RAM to be happy).

At this point you have the samples copied. If you run "ConfigureSamples" it will install the MSDE database; if you need to install the full SQL Server 2000 you'll have to configure the samples manually, so I suggest installing SQL Server 2000 first and then configuring and building the samples afterward.

Keep in mind that you can also download the IBuySpy samples from the site; they will install without problems if you have SQL Server 2000 installed. If you have the samples with MSDE installed, you need to change the bat file that creates the database to use OSQL instead of ISQL. Beside that, I've installed both the C# and VB versions of IBuySpy on MSDE without any problem. I would suggest avoiding installing other software on the machine. -- Gentilini Massimo.

If I write a COM+ app today can I move it to the .NET platform and use the services that .NET provides (ex: let a new .NET class inherit from my COM+ class) without having to rewrite or make mondo changes in my COM+ code? Am I gonna be stuck with a "legacy" COM+ app a year from now and have to use some poor performing connector from the .NET to COM world? MS could provide some specs as to how to design apps today so they can be ported to the .NET world once everything is released. Or can this even be done? One problem I have had with MS is that they jump around a lot without laying a solid foundation in one technology. COM+ has hardly been released for a few months and it is already obsolete.


Could you explain to me why you think COM+ is obsolete? What do you mean by "COM+"?

This is what most people who use COM+ do today. They build their app in VB6. They compile it into a DLL. They configure it in the COM+/MTS explorer to use the services they need to use.

This is how most people are going to use COM+ when .Net comes out. They'll build their app in VB.NET (or C#). They'll compile it into a DLL. Instead of configuring it in the explorer, most people will simply put the attributes in the class or method indicating what services they want to use. The COM+ Services that you use with .Net will be *exactly* the same services you use today.

As far as migration goes, this is largely a language issue (in fact it could be argued that it is nothing but a language issue). One big issue is deterministic finalization...but you know what? This is why I made it a requirement that JIT be used when transactions are used in COM+. As long as you call SetComplete/SetAbort (or mark your method as an AutoDone method) when your component is supposed to be deactivated, you don't have to worry about deterministic finalization (the context will be GC'd, but the object will be deactivated). If the tool to migrate VB code is as good as I think it will be, your investment in VB 6/COM+ today will move forward into the .Net world w/o very much pain at all. And no, you won't be able to inherit from your binary COM+ class (which, btw, doesn't make a lot of sense to me - I think of them as "COM+ _Components_" because to me, classes are about source and components are about binaries)....but if you've migrated your code, you can certainly use the language features you are used to.

As far as some "poor performing com connector code" goes, I think we need to be really careful here. What makes you think it's going to be poor performing? What are you comparing the performance against? I'd argue the right comparison is VB 6 vs. VB .NET. I fully expect VB .NET to not only perform better, but scale better as well. VB .NET won't have any of the "crust" that built up over time in the VB 6 runtime. The simple fact that VB .NET objects do not have to run on STA threads will make an enormous difference for performance and scalability.

As far as Microsoft "jumping around a lot w/o laying a foundation", I'd argue that this is exactly what we are not doing. My team is the same team that built MTS 1.0, MTS 2.0, and COM+ 1.0, and we are the team that will be building the service infrastructure for the .Net world. We will be evolving and innovating on the same code base.

Thanks for listening. -- Joe Long, Microsoft.

For my taste, you are painting a picture that is far too rosy (understandably). From my perspective, things look a little bit grayer.

In 1997, MS announces a major overhaul to COM that at least in spirit is somewhat similar to .NET.

In 1998, MS announces that we should forget everything that was said in 1997 (watch the Mary Kirtland presentation from PDC 1998) but that we will get COM+ with Windows 2000 which is still a MAJOR upgrade.

In 2000, Win2K ships with COM+ but apart from a "Readiness Kit", there is very little tool support. It is very strange that the most important upgrade in MS operating systems does not even lead to a 6.1 upgrade in the tools.

Less than half a year later, MS announces .NET as the next big thing. MS claims that this is not vaporware and shows samples on its web site but 99.9% of MS developers cannot compile them.

Throughout all these years, MS states that "MFC is not dead" and "ATL is incredibly important."

Do not get me wrong. I have heard very little about .NET but the little I have seen looks very promising. But looking at the above history, I do not believe that you can blame people for saying that MS's message has been confusing. It is strange that one author on the MS web site hails how MS succeeded in keeping .NET secret from the press and that the press still has not really caught on to the importance of what MS is doing. But nobody seems to realize that if the press is in the dark, a lot of developers might be as well. -- Olaf Westheider.

If I understand you correctly, it seems that you expect a linear, continuous evolution of COM technologies over the years. But this is not possible. Why? Because from the beginning, COM was in fact a name for a set of technologies (OLE, classic COM, automation, connection points, ActiveX, DCOM, MTS, MSMQ, COM+ events, queued components, etc.). Some of them hardly have anything in common (like OLE and MSMQ, for example), yet they are still viewed as part of COM. What people tend to forget is that COM is not a real, well-defined entity; it is a set of things that evolve separately. Hence the confusion: "Why doesn't COM evolve in clear and well-defined steps?"

As an example: you mention that COM+ 1.0 didn't bring an upgrade of the programming tools. But I wonder what upgrade was necessary, since COM+ 1.0 is mostly about services. All the tools that you need were already present. You don't need anything special in VC++ in order to use COM+ events or queued components, for example.

Maybe you can view the evolution of COM better if you make the following distinction:
1) The set of technologies that offers the interoperability standard. This is the "glue" that keeps everything together. Right now it is classical COM, DCOM and automation (and transactions, starting with COM+ 1.0). And this "glue" is evolving into .NET.

2) The set of functionalities offered on top of this infrastructure (like MSMQ, events) that are expected to stay in the same form. -- Adi Oltean, Microsoft.

What do you mean by the argument about abstract base classes vs. interfaces? C# looks to me to have a well-defined concept of "interface", and most C++ programmers are comfortable with C++ abstract base classes as interfaces.

In C++, there is no formalism for interfaces, rather the convention of "I know one when I see one" that we use in COM/C++ programming. In type systems like COM, Java or the CLR, there is a formal distinction between class and interface. In COM, classes were just loadable units of code that implemented one or more interfaces. In Java and the CLR, things are different. There are two ways of defining a collection of abstract methods in the CLR and Java: interfaces and abstract classes. Consider the following Java code

(porting this to C# is left as an exercise...):

interface ICalculator {
double Add(double x, double y);
double Multiply(double x, double y);
}

abstract class CCalculator {
abstract public double Add(double x, double y);
abstract public double Multiply(double x, double y);
}

Both types express a contract between the client and the implementation. Both types contain no implementation goo (in this case).

So, what is the difference?

For one, consider the following extensions to ICalculator:

interface ICalculatorWithDiv extends ICalculator {
double Divide(double x, double y);
}

interface ICalculatorWithExp extends ICalculator {
double Exp(double x, double y);
}

A given implementation can implement both extensions as follows:

class FancyCalc implements ICalculatorWithDiv, ICalculatorWithExp {
// implement methods here...
}

Now consider the same extensions rendered against the abstract class:

abstract class CCalculatorWithDiv extends CCalculator {
abstract public double Divide(double x, double y);
}

abstract class CCalculatorWithExp extends CCalculator {
abstract public double Exp(double x, double y);
}

Unfortunately, since CLR and Java only support a single base class, there is no way to implement a single object that exposes both contracts. That is, the following is illegal:

class FancyCalc extends CCalculatorWithDiv, CCalculatorWithExp {
// implement methods here...
}
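The legal half of the contrast above compiles as-is. Here is a minimal compilable sketch reusing Don Box's type names; the method bodies and the InterfaceDemo driver are illustrative fillers, not part of the original:

```java
// One class can implement both interface extensions; the abstract-class
// variants could not be combined, since Java allows only one base class.
interface ICalculator {
    double Add(double x, double y);
    double Multiply(double x, double y);
}

interface ICalculatorWithDiv extends ICalculator {
    double Divide(double x, double y);
}

interface ICalculatorWithExp extends ICalculator {
    double Exp(double x, double y);
}

// Legal: a single object exposing both contracts.
class FancyCalc implements ICalculatorWithDiv, ICalculatorWithExp {
    public double Add(double x, double y)      { return x + y; }
    public double Multiply(double x, double y) { return x * y; }
    public double Divide(double x, double y)   { return x / y; }
    public double Exp(double x, double y)      { return Math.pow(x, y); }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        FancyCalc calc = new FancyCalc();
        // The same instance is usable through either contract.
        ICalculatorWithDiv d = calc;
        ICalculatorWithExp e = calc;
        System.out.println(d.Divide(10, 4)); // prints 2.5
        System.out.println(e.Exp(2, 10));    // prints 1024.0
    }
}
```

Trying the same with the two abstract-class extensions fails at compile time, which is exactly the limitation described below.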

So, what are the arguments in favor of abstract classes?

For one, you can put implementation goo into the contract. For example:

abstract class CCalculatorWithStuff {
abstract public double Add(double x, double y);
abstract public double Multiply(double x, double y);

public boolean hasOverflowed() { return m_bOverflowed; }
private boolean m_bOverflowed = false;
protected void onOverflow() { }
protected void overflow() { m_bOverflowed = true; onOverflow(); }
}

This is obviously no longer simply a contract. Whether that is a feature or a bug depends on whether you believe in the benefits of design by contract or not. I do, so to me, this abstract class is bad software. Another key distinction is that one can provide default implementations of methods with abstract classes but not interfaces. There are two scenarios where this is interesting: the "SAX" case and the "IFoo2" case. If you look at the SAX interface suite, each contract is defined using a Java interface. To ease the burden on developers, there is an "Impl" class that provides no-op implementations of each of the methods:

interface IContentHandler {
void startDocument();
void endDocument();
void startElement(...);
void endElement(...);
// remaining methods elided for clarity
}

interface IDTDHandler {
void notationDecl(...);
void unparsedEntityDecl(...);
}

class DefaultHandler implements IContentHandler, IDTDHandler, IErrorHandler, IEntityResolver {
public void startDocument() {}
public void endDocument() {}
public void startElement(...) {}
public void endElement(...) {}
// remaining methods elided for clarity
}

Had IContentHandler et al been expressed as an abstract class instead of as an interface, default method implementations could have been provided. HOWEVER, no single class could have implemented both IContentHandler and IDTDHandler. So, here's a great example of where abstract classes as contracts don't make sense. The other scenario where abstract classes as contracts seem useful is for evolution. Consider the following interface (and implementation):

interface IFoo { // version 1
void doFoo();
}

class Foo implements IFoo {
public void doFoo() { }
}

Consider what happens when the designer of the IFoo contract wants to add
another method:

interface IFoo { // version 2
void doFoo();
void doBar();
}

Unless the Foo class is modified, there are "issues" when the class is loaded, since it no longer implements all of the operations of IFoo (it did when it was compiled, however, since at that time there was only one method in IFoo). In Java 1.1.x, the loading of Foo would cause an error. Because so many Sun classes didn't use interfaces (interfaces were added relatively late to the Java language after some key libraries were already done), Java 1.2.x added support for loading the class anyway, and throwing a well-known exception when you tried to call the missing method. This flies in the face of both COM and CORBA, where public/shared types need to be immutable.

The availability of abstract classes give you an out to the immutability issue. Had IFoo been expressed as an abstract class:

abstract class IFoo {
public abstract void doFoo();
}

class Foo extends IFoo {
public void doFoo() { ... }
}

The second version of the contract could have provided a default implementation for the new method, as follows:

abstract class IFoo { // version 2
public abstract void doFoo();
public void doBar() { }
}

Now, when the old Foo class loads, it gets a default implementation of doBar, so no load-time or invocation-time error will occur. The question is, is this a feature or a bug? As far as I can tell, the answer is both. It feels like a feature, however, I believe that it requires even more discipline to get right, since the new contract may or may not be compatible with the old contract. -- Don Box.

I do see your point, but don't believe abstract classes are generally bad. Java and the CLR view interfaces as a way to adapt the type of a class to something else. Implementing Runnable on a Java class, for example, changes the class type to something you can pass as an argument to a Thread object by forcing you to implement the "run" method. I view this in a slightly different light, as providing a String and StringImpl class where the former provides the contract and the latter the implementation.

In essence, what I am saying is that all viewpoints expressed in this discussion are valid, but is it really worth debating whether the CLR and COM are the same? The CLR does have slightly different semantics than COM, but at the same time you can use it more or less like COM if you so desire. The important question is, should you really be doing this? I am of the opinion that many developers (especially ones migrating from Java and C++) will use this stuff more like one would program Java rather than COM, and the COM developers will approach the task somewhat differently. In the end both are right. -- Piet Obermeyer.
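The Runnable point above can be shown concretely. This is a minimal sketch, with a hypothetical Worker class standing in for any user type:

```java
// Implementing Runnable adapts Worker's type so an instance can be
// handed to a Thread; the interface forces us to provide run().
class Worker implements Runnable {
    private int result = 0;

    public void run() {
        result = 42; // stand-in for real work
    }

    public int getResult() {
        return result;
    }
}

public class RunnableDemo {
    public static void main(String[] args) throws InterruptedException {
        Worker w = new Worker();
        Thread t = new Thread(w); // accepted because Worker is-a Runnable
        t.start();
        t.join();
        System.out.println(w.getResult()); // prints 42 once the thread has finished
    }
}
```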

So do you see ATL going away in favor of C#?

ATL will still be a great tool to build COM components in unmanaged C++. C# will probably become the preferred language (for C++ programmers) to write managed code. -- Pascal Bourque.

For people writing unmanaged code, I think ATL (and WTL) will still be the way to go. For people writing managed code, ATL becomes largely irrelevant. So, if the world stops writing unmanaged code tomorrow, then ATL will join MFC in the mothball heap and will exist simply to keep the installed base working. If the world continues to write unmanaged code, then ATL will flourish.

The more interesting question is whether the C++ programming community will use managed C++ or C# for writing managed code. Too early to tell on both fronts. Luckily, the community at large gets to make the choice. -- Don Box.

The point is, of course, that ATL has nothing to do with managed code. My opinion - and I think Don has voiced this on this group too - is that C# is the natural language for managed code (to answer the title, if every language is equal in the eyes of .NET, then why go for the inelegance of managed C++, when you can have C#?). I think that this kinda makes C++ development redundant if you believe in managed code. A scan through the common class library shows that Microsoft have just about covered everything - with a few exceptions of course. -- Richard Grimes.

Is there any way to get the ThreadId of the currently running thread i.e. Thread.CurrentThread.?


;-) -- Patrik Torstensson.

Your point is taken... we didn't make it very easy to obtain the threadid from a managed thread... that is because in general, we don't want to encourage people to obtain the ThreadId of a thread because it limits our ability in the future to break the 1:1 correspondence between managed threads and OS threads (e.g. implementing fibers in the runtime). So, I'd like to understand where you guys are using the threadid? I suspect you are passing it out to unmanaged code? -- Brad Abrams, Microsoft.

Follow up to Brad's mail:

If you are retrieving the threadId to have a unique identifier for tracking purposes then a better way is to use the hash code from the thread object e.g. int hash = Thread.CurrentThread.GetHashCode(); Additionally you can tag a string name to a thread e.g. Thread.CurrentThread.Name = "Bob";

Here is an example:

using System;
using System.Threading;

public class Alpha
{
    // The method that will be called when the thread is started
    public void Beta()
    {
        int hash = Thread.CurrentThread.GetHashCode();
        String name = Thread.CurrentThread.Name;
        Console.WriteLine("Alpha.Beta is running in its own thread");
        Console.WriteLine("Thread Hash : {0}", hash);
        Console.WriteLine("Thread Name : {0}", name);
    }
}

public class Simple
{
    public static int Main(String[] args)
    {
        Console.WriteLine("Thread Simple Sample");
        Alpha oAlpha = new Alpha();

        // Create the thread object, passing in the Alpha.Beta
        // method via a ThreadStart delegate
        Thread oThread = new Thread(new ThreadStart(oAlpha.Beta));

        // Set the thread name
        oThread.Name = "Bob";

        // Start the thread
        oThread.Start();

        return 0;
    }
}

--Jonathan Hawkins, Microsoft.

Or you could just prototype the Win32 call as follows:

[DllImport("kernel32.dll")]
static extern int GetCurrentThreadId();

--Don Box.

Still very little comment from anyone at MS about deterministic destruction. Any comments guys?

You can pass structs by reference. In C# you just say "ref" on the parameter list.

Personally, I don't miss deterministic destruction, since I can always implement it if I need it. Perhaps a constructive suggestion would be for folks to specify the requirements of a system that they need. Don't just view the problem as "C++ destructors" but as a system of resource management with deterministic control points.

Remember that malloc/free is still there if you need to use it. -- Brad Merrill, Microsoft
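One way to "implement it if you need it", as suggested above, is an explicit dispose method called from a finally block. A sketch under that assumption, with a hypothetical Resource class standing in for anything that holds a scarce handle:

```java
// Deterministic cleanup without destructors: the client, not the GC,
// decides when the resource is released. Resource is a hypothetical
// stand-in for a file handle, connection, etc.
class Resource {
    private boolean open = true;

    public boolean isOpen() {
        return open;
    }

    // Deterministic release point, invoked explicitly by the caller.
    public void dispose() {
        open = false; // release the underlying handle here
    }
}

public class DisposeDemo {
    public static void main(String[] args) {
        Resource r = new Resource();
        try {
            System.out.println("in use: " + r.isOpen()); // prints "in use: true"
        } finally {
            r.dispose(); // runs at a known point, even if an exception is thrown
        }
        System.out.println("after: " + r.isOpen()); // prints "after: false"
    }
}
```

The finally block gives the deterministic control point the question asks about, regardless of when the garbage collector eventually reclaims the object itself.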
