Monthly Archives: February 2004

Applied Decentralization: A large-scale social system for HLS

It's been a few months since I've posted – a very busy and exciting time here at Groove, both in terms of what's been happening in the business and the market, and because we're closing in on the first beta of Groove V3. I can't wait to tell you about the improvements in V3 … because after having used it day in and day out for a few months now, I've simply never felt nearly this excited about a product that I've worked on. And that says a lot. More on V3 in a few weeks!

For those of you who have been following Groove for quite some time, you may recall that the product's original raison d'être was to enable people “at the edge” to dynamically assemble online into secure virtual workspaces, to work together and to get something done, even if those individuals were in different organizations with completely different IT infrastructure.

Today, with the gracious permission of one of our most significant customers, Groove made an announcement that I'd like to talk about for a moment. It's very significant to me for two reasons: First, the nature of how Groove is being used in this solution demonstrates to the extreme the very reason why Groove was built the way it was, from a technology and architecture perspective. Decentralization at its finest. The customer's core challenge was to enable individuals from many, many different organizations – most of whom had little or no opportunity for training – to rapidly assemble into small virtual teams to selectively share information, make decisions, get the job done, and disassemble. The individuals are geographically dispersed. They use different kinds of networks, behind different organizations' firewalls and management policies. They are very, very highly mobile. And there are few applications where the requirement for deep and effective security is more self-evident.

Groove's press release can be found here.

The Department of Homeland Security's press releases related to HSIN can be found here and here, while Secretary Ridge's remarks are here.

Why was a decentralized architecture for this network so fundamentally important, and thus why was Groove uniquely suited for the task? This brings me to the second reason that I'm tremendously pleased to have had the opportunity to contribute to solving this problem. Larry Lessig taught us that in software-based systems in cyberspace, the code can define outcomes – inadvertently or intentionally – that have an impact on society. Or, better stated in this case, the system's core architectural design principles have a real impact not only on the system's mission effectiveness, but also on how effectively it can preserve and protect rights.

To understand these issues more deeply, one need look no further than the eloquent work released this past December by the Markle Foundation Task Force on National Security in the Information Age, called “Creating a Trusted Network for Homeland Security”.

If you're interested in the “why” of decentralization, read the report. Look at the members of the task force. And take particular note of their proposed SHARE network and its architecture. (Interestingly, Richard Eckel wrote about it here in his blog before he became aware of the details of Groove's involvement with HSIN.)

Lots of stuff here to read, but it's truly fascinating if you are interested in understanding how decentralization and peer-to-peer technology are having a real impact on government and society.

Although so many people are involved in this project because of its scope, I'd like in particular to recognize Col. Tom Marenic, Pat Duecy, Ed Manavian, and especially our partner Mike Kushin of ManTech/IDS. My sincere thanks for your leadership, your passion about the mission, and your appreciation for organizational dynamics, social dynamics, technology, and architecture in assembling a large and empirically effective system for purposeful social interaction. [Ray Ozzie's Weblog]

The 1060 REST microkernel and XML app server

The 1060 REST microkernel and XML app server. Suhail Ahmed alerted me, via email, to a really interesting project called NetKernel, from 1060 Research. The docs describe it as “a commercial open-source realisation of the HP Dexter project.” Here's the skinny:

Today's Web-servers and Application Servers have a relatively flat interface which creates a hard boundary between Web and non-Web. This boundary defines the zone of URI addressable resources.

What if the REST interface (URI address space) didn't end at the edge of your external interface?

NetKernel uses REST-like service interfaces for all software components. The services are fully encapsulated in modules which export a public URI address space. A module may import other modules' address spaces; in this way, service libraries may be combined into applications. [NetKernel Essentials]

What if, indeed? I downloaded the 20MB NetKernel JAR file, installed the system, and took it for a spin. Fascinating concept. As advertised, it offers a suite of XML services — including XSLT and the Saxon implementation of XQuery — in a composable architecture based on URIs. These include the familiar http: and file:, plus NetKernel's own active:, a URI scheme for NetKernel processes scheduled by the “REST microkernel.” [Jon's Radio]
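
To make the composition idea concrete, here's a rough sketch in Java of how such an active: URI might be assembled. The service and argument names (xslt, operand, operator) follow the pattern suggested by the NetKernel docs quoted above, but the exact grammar, the file paths, and the helper class are illustrative assumptions rather than NetKernel's actual API:

    import java.net.URI;

    public class ActiveUriSketch {

        // Compose an "active:" URI for a named service from name@uri
        // arguments, following the pattern implied by the NetKernel docs.
        static String active(String service, String... args) {
            StringBuilder sb = new StringBuilder("active:").append(service);
            for (String arg : args) {
                sb.append('+').append(arg); // e.g. "operand@file:/doc.xml"
            }
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            // An XSLT transform addressed as a resource: the source document
            // and the stylesheet are themselves just URIs in the same
            // address space. Both paths are hypothetical.
            String xslt = active("xslt",
                    "operand@file:/data/doc.xml",
                    "operator@file:/styles/report.xsl");

            // Prints:
            // active:xslt+operand@file:/data/doc.xml+operator@file:/styles/report.xsl
            // Because the result is itself a URI, it could in turn be the
            // operand of another service; composition is just URI nesting.
            System.out.println(new URI(xslt));
        }
    }

The point of the sketch is that last comment: a service invocation is itself addressable, so the REST interface really does extend inward past the edge of your external interface.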

Exchange 2003 and Active Directory

Exchange 2003 and Active Directory. Chapter one of Steve Bryant's free eBook “The Expert's Guide for Exchange 2003: Preparing for, Moving to, and Supporting Exchange Server 2003” has been published over at Windows & .NET Magazine.

“This eBook will educate Exchange administrators and systems managers on how to best approach the migration and overall management of an Exchange 2003 environment. The book will focus on core issues such as configuration management, accounting, and monitoring performance with an eye toward migration, consolidation, security and management.”

[MS Exchange Blog]

Anti-virus built into XP SP2?

Anti-virus built into XP SP2? As Dana pointed out, this is an interesting article about Microsoft adding a built-in virus scanner to Windows XP SP2. I'm torn, obviously. On one hand, for all those home users who never seem to be able to use AV properly, maybe this isn't such a bad thing, although many still won't remember to update it. Is it going to be automatically self-updating? If so, is that necessarily a good thing? I can see arguments either way. On the other hand, is it going to be like the firewall or zip utility that's built into XP now, which aren't as good as other products on the market? And is that going to force out the better products?

I also need more information on these incompatibilities between SP2 and other AV products. There's no way the built-in virus scanner could replace all the functionality I get from Symantec Corporate Edition, and if Symantec won't even work with SP2, I'm not going to install the service pack at all until that's fixed. I get the feeling this will bear keeping a close eye on over the next few weeks and months.

Update: Some more links Re: SP2 details. [Life of a one-man IT department]

Exchange Server 2003 Security Hardening Guide

Exchange Server 2003 Security Hardening Guide. “This guide is designed to provide you with essential information about how to harden your Microsoft® Exchange Server 2003 environment. In addition to practical, hands-on configuration recommendations, this guide includes strategies for combating spam, viruses, and other external threats to your Exchange 2003 messaging system.” [MS Exchange Blog]

Does more productive Visual Studio mean fewer IT jobs?

Does more productive Visual Studio mean fewer IT jobs?

Hmmm, Darcy Burner takes on Jim Fawcette after he wrote that the increased programmer productivity that tools like Visual Studio bring is what's causing developers to get laid off.

I too disagree with my old boss. At Demo last week, VCs were telling me that they're again having trouble finding good programmers who know .NET (many of the new products/services shown there were done in .NET). As we get closer to Longhorn (yeah, it's still a long way off), you'll see that economic pressure increase on the .NET side of things too.

The almond/pistachio processing factory I visited at Christmastime is a good example. The whole factory runs on .NET. They now employ more programmers than they did before. Why? Because the work is shifting from tasks done by manual labor to work where programmers can squeeze more efficiency out of new machines and processes (and offer new kinds of products and new quality levels).

Every CEO I talk to is planning on hiring more programmers, not fewer. Look at REI's CEO, whom I met on the plane. He has a team of programmers working on building a “store of the future.” That's for a sporting goods store. He thinks programmers will let him outrun his competition. And the fact that he can get more productivity out of each programmer makes it MORE LIKELY he'll hire more programmers. [Scobleizer: Microsoft Geek Blogger]