Yes, and to be honest, the issue we hear with that is that it's a little too fast. Some of the larger ISVs just need more time.

Enterprises require stable OS platforms. Is that contributing to the zero demand?
There's nothing there that they've identified that they need for their business at this time. They need stability. They need the threading capability, the network stack, the performance. The same with ISVs. There's no new capability enablements in 2.6 that aren't already there.
The reality is, let's do it right. It's a good step forward for the design of the Linux kernel. I do expect the storage management stack to get a bit more scalable; right now it's not hitting limits that anyone is bumping into. That's one area where, as people do bigger SAN deployments, they'll hit the terabyte limit that file systems have today with 2.4; 2.6 will be past that.
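The "terabyte limit" in 2.4-era stacks is commonly attributed to the block layer's 32-bit sector index over 512-byte sectors; the speaker doesn't spell out the cause, so that attribution is an assumption here, but the arithmetic behind the commonly cited 2 TiB ceiling is easy to check:

```python
# Back-of-the-envelope check: a 32-bit sector index over 512-byte
# sectors caps a block device at 2 TiB.
SECTOR_SIZE = 512        # bytes per sector on classic block devices
MAX_SECTORS = 2 ** 32    # largest count a 32-bit index can address

max_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_bytes)                    # 2199023255552
print(max_bytes // 2 ** 40, "TiB")  # 2 TiB
```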
The worst thing would be to rush it out and give somebody a bad experience. It's not even close to done. Red Hat's not a company that solely takes open source stuff and builds a product around it; we're the developers of it. Our guys have huge worklists of stuff to get done for 2.6 to be viable. It doesn't work on half the platforms we support. There's no asynchronous I/O yet, which Oracle needs to run. So there are a lot of missing pieces that need to be developed.
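The asynchronous I/O a database wants is the kernel-level submit/complete interface (io_submit/io_getevents in 2.6): issue many requests up front, then collect completions instead of blocking on each read. As a rough illustration of that shape only, here is a sketch that uses a thread pool as a stand-in; the pool, not kernel AIO, does the overlapping here, and all names are illustrative:

```python
import concurrent.futures
import os
import tempfile

# Create a 4 KiB scratch file to read from.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)
os.close(fd)

def read_block(offset, length):
    # Each "request" opens, seeks, and reads independently.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Submit all requests up front, then gather completions --
# the same submit/complete shape kernel AIO exposes.
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(read_block, off, 512) for off in range(0, 4096, 512)]
    results = [f.result() for f in futures]

print(len(results), all(len(r) == 512 for r in results))  # 8 True
os.unlink(path)
```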
We're already building full distributions around it.
More than anything, it frustrates engineers because there's no data for them to see. It hasn't slowed us down. We brought in 3,000 new customers last quarter, which just shows that customers, though they're aware of it, are still executing their strategies.

SCO CEO Darl McBride has said that Linux would be unrecognizable if the allegedly infringing code had to be removed. Is there any truth to his statement?
That's so far from reality. Something like this could take a proprietary company down if a significant proportion of its code were implicated, but there's no way it would ever take down an open-source project.
There's sort of this image of open-source developers as basement-dwelling long-hairs. The reality is that while there may be people who fit that stereotype, there are also people who have worked on and understand the whole lineage of Unix far better than Darl McBride. These guys understand every aspect and the chain: what happened, where code comes from and whether it's licensed; there's a deep understanding of the evolution of the code.
And if there happens to be code that's infringing, they'd love to fix it. Look at Andrew Morton himself, the overseer of the 2.6 kernel; he said it would probably take two weeks to replace any amount of code they identified. It might not be exactly two weeks, but McBride's not correct. The "unrecognizable" statement doesn't make a lot of sense.

What kinds of questions are customers asking today? I'm sure the conversations are a lot different than they were even a year ago.
A lot of the questions they're asking are around consultation on practical experience: hardware, scalability, AMD vs. Intel; really a lot of guidance more than anything. They want to be beneficiaries of other people's successes and mistakes. We're seeing a lot more planning than we have in the past. Also, they're asking us to work closer with them on application integration.

So, this is a first step toward virtualization?
It deals with the deployment aspects of provisioning, making servers look alike. It's that first step in a management component. Then you'll see us building the clustering component out which deals with on-demand computing aspects.
With the Sistina acquisition, we have now completely built out the storage-management stack from the drivers, to multipath, to volume management to file systems in a clustered fashion. Those are the aspects that will lead to storage on demand, online changing of volumes, sharing of storage between systems so that multiple systems plug into a network and they all can share the same files and data. That to us was a huge step toward a virtualized world.
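The stack described above layers cleanly: the file system sits on the volume manager, which sits on multipath, which sits on the drivers. A hypothetical sketch of that layering, where each layer resolves an address one level down (this is illustrative code of my own, not real LVM or GFS internals):

```python
# Illustrative layering: file system -> volume manager -> multipath -> driver.
EXTENT_MAP = {0: 100, 1: 901}   # logical extent -> physical sector (made up)
PATHS = ("sda", "sdb")          # two redundant routes to the same storage

def driver(sector):
    # Bottom layer: the device driver addresses raw sectors.
    return f"disk sector {sector}"

def multipath(sector):
    # Multipath picks one of several routes to the same device.
    return f"{PATHS[sector % len(PATHS)]}:{driver(sector)}"

def volume_manager(extent):
    # The volume manager maps logical extents onto physical sectors,
    # so volumes can be resized or moved without the layers above noticing.
    return multipath(EXTENT_MAP[extent])

def filesystem(extent):
    # The file system only ever sees logical extents.
    return volume_manager(extent)

print(filesystem(0))  # sda:disk sector 100
print(filesystem(1))  # sdb:disk sector 901
```

The point of the indirection is the one made in the interview: because each layer only talks to the one below it, volumes can change online and storage can be shared without the file system layer having to know.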
Sistina was a huge deal for Red Hat.
We looked at the build-buy conundrum that everyone goes through. They had the strongest open source and Linux team and the best technology. Talking to customers, they rose to the top pretty quickly.

Was last week's provisioning announcement a step toward utility computing for Red Hat?
The areas we're focusing on and building out with our Open Source Architecture [initiative] and the Red Hat Enterprise Linux platform are security, management and virtualization. We consider those huge components of the IT infrastructure that are tightly integrated with the operating system. We want to deliver those as open source to drive cost out of the equation.
The provisioning module extends the Red Hat Network service so that you can now do rapid, more scalable deployments. Through the Red Hat Network provisioning module, you essentially configure your system before you even install it. You describe how you want it laid out and what the network topology looks like. Then, when systems install themselves, they are pre-configured. Those configuration changes are played across the network.
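The describe-then-install idea can be sketched in a few lines: a desired configuration is recorded before the machine exists, then rendered into install-time directives and "played back" when the system comes up. All names below are hypothetical illustrations, not the Red Hat Network API:

```python
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    """Desired state recorded before the system is installed (hypothetical)."""
    hostname: str
    packages: list = field(default_factory=list)
    network: dict = field(default_factory=dict)

def render_install_config(profile: SystemProfile) -> str:
    # At install time the stored profile is replayed, kickstart-style,
    # so the machine comes up pre-configured.
    lines = [f"hostname {profile.hostname}"]
    lines += [f"install {pkg}" for pkg in sorted(profile.packages)]
    lines += [f"net {key}={val}" for key, val in sorted(profile.network.items())]
    return "\n".join(lines)

profile = SystemProfile(
    hostname="web01",
    packages=["httpd", "openssh"],
    network={"ip": "10.0.0.5", "gw": "10.0.0.1"},
)
print(render_install_config(profile))
```

Because the profile is data, the same description can be replayed across many machines, which is what "making servers look alike" amounts to.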
Are there any holes in the open source development process that this program may help close?
There are fewer holes [today]. I spent 16 years at a proprietary software company, and the review process for its IP was not nearly as stringent as what happens in open source. It's just sort of the eyeball effect you have. It's less of a process and more about the number of eyeballs.
When you think about it, in a proprietary company, there might be only one or two people working on a particular sub-system. When you add to staff, you don't know where their past knowledge has come from. With open source, there's always more than one or two people looking at a particular area. When those people make changes, those changes are made public so it's very obvious to many people if there appears to be an infringement. It's probably the world's biggest review process.
How much enterprise interest are you hearing in terms of a 2.6 version of Linux?
Zero. Literally dead flat, which is good, because they're not asking you for a kernel version; they're asking you for a certain capability, feature or performance. The way we engineered Red Hat Enterprise Linux 3, the compelling pieces of 2.6 that our customers were asking for were maintained as backports.
So, for example, the network stack, including IPv6, is consistent: it's the same one in our RHEL 3, which is on the 2.4 kernel, as it is in 2.6. The threading model, too. Some of the compelling pieces of the 2.6 kernel are already in our 2.4 product.
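The "capability, not kernel version" point has a concrete programming corollary: probe for the feature itself instead of parsing the version string, since a backported feature is present on a kernel whose version number says it shouldn't be. A sketch of both approaches (epoll is my own illustrative choice of a 2.6-era interface; the interview's actual examples were IPv6 and the threading model):

```python
import select

def kernel_at_least(release: str, want=(2, 6)) -> bool:
    # Fragile: parse the version string. A feature backported into a
    # 2.4-based product fails this check even though it is present.
    major, minor = release.split(".")[:2]
    return (int(major), int(minor)) >= want

def supports_epoll() -> bool:
    # Robust: probe for the interface itself, wherever it came from.
    return hasattr(select, "epoll")

print(kernel_at_least("2.4.21-4.EL"))  # False -- version test misses backports
print(kernel_at_least("2.6.0"))        # True
print(supports_epoll())                # depends on the running platform
```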
What's interesting to us about 2.6 is that there has been a fair bit of redesign like in the VM system and the I/O system, so you're going to see that it's a better design for engineers to work with. It's more structured. That's what's great about Linux and the evolutionary model. If they decide there's a better scheduler, for example, they'll throw out the old scheduler and put the new one in.
Why not offer indemnification rather than the protection offered by the Open Source Assurance program?
We felt indemnification would not be the right answer. It's a warranty service. We provide the assurances that if somebody adopts Linux and there are issues in the future, those issues will be resolved. That's something we're in a great position to do with the development team we have.

Do you see more open-source companies going in this direction?
Yes, it probably makes the most sense. It's a service that's valuable and manageable because of the nature of open source, where there is full disclosure at all times, not just to customers but in general, whenever someone claims that their IP has been infringed upon.