Why did developers start using IDEs? The single-tool experience. Developers always have so much to learn and remember:
the environment, the application, the domain requirements… the list goes on. At least with an IDE there is only one toolset to learn and remember. You can think of an IDE as a single operating console for the developer -- there's still a lot to it, but at least there is consistency in approach and operation.

Why are IDEs getting so complex, and is that complexity killing their usefulness? It seems like everything in our world grows more complex with time, and IDEs are no exception. Some of that complexity is just vendors getting caught up in feature races without much consideration of real benefit, but another element is IDEs trying to keep up with new development paradigms and methodologies. Is this killing their usefulness? Sure. At some point it becomes easier to use a handful of special-purpose tools than to learn how to find a capability buried somewhere in a bloated IDE. We all tend to learn the parts we use every day and ignore the parts that are either difficult or rarely used.
Look forward, not backward. Understand the methodology and approaches you will be using for development over the next few years, then find the IDE that best represents that approach.
I always say that all software has, at its core, a set of basic beliefs and philosophies that can be discovered through how it solves problems. This is true of IDEs as well. Microsoft IDEs tend to be visual and user-interface centric, reflecting their desktop beliefs and history. Java-based IDEs tend to be better code editors and organizers, as Java developers are more fond of writing code. Find the IDE that matches your style, not just your feature wish list.
With all this in mind, look for the simplest way to do the job. If the software isn't simple out of the box, make sure it can be configured to be simple for the jobs you do most often. Someone once pointed out that software users should be novices for only a short period of time, but experts for the life of the job. So an IDE should let you be an "expert" at the tasks you do often, with very little interference.
Finally, if you run a multi-developer shop, make sure that the IDE encourages, if not enforces, the practices that you want your developers to follow. Traditionally, methodology and practices have been enforced only through discipline and things like code reviews. Some of the better IDEs can be configured to encourage or enforce your standards directly, saving review time and ensuring more consistent output.

When did SOAs come on the scene, and what value do they bring?
Many of the principles are well over 20 years old. We've all believed in re-use for years, but rarely have we put our beliefs into practice. An SOA is really a set of principles and standards that help define re-use at the right levels and make it easier to implement. As such, it has really been a popular concept for just three or four years.

Could you offer some best practices for using SOAs?
There are many angles on this, some having to do with development practices, others concentrating on deployment environments. As an application developer, I'll concentrate on the former. After all, deployment options are pretty limited if the applications are not architected correctly in the first place.
In looking at SOA as it pertains to application design and development, the most critical point is to start from the business process, then work outwards. The automated business processes really represent the catalog of services that must be built. Traditionally, we tended to start at the user interface level and work our way "down" through the application (the old "inputs and outputs" approach to development). But with SOA, we need to start by understanding and automating each service, with a full understanding that the interactions between these services must be as flexible and portable as possible. From there we can build integration and user interfaces that form larger composite operations.
So the guidelines are these:
- Catalog and understand the business processes to be automated. Each represents a potential service. Service granularity is the key, and you can't figure that out without understanding the business processes.
- Understand which services will likely need "public" (outside the application) exposure and which can reliably be categorized as internal-only. Be conservative: you'll be surprised how many things you think will always be internal-only wind up being used outside a single application.
- Build the services as if no human interface were required. Often there won't be one, and even if there is, you want to design the service so that the user interface can be upgraded, replaced or eliminated without affecting its operation.
- Plan for flexible deployment options. Services will tend to get scattered between servers, operations, networks and user interface approaches, so you'll need to be flexible in your interface and network expectations. Ask yourself if the service will still perform correctly if it is implemented as a Web service on the other side of the globe.
- Understand that user interfaces are nothing more than an alternative form of inputs and outputs. They shouldn't encapsulate processing rules, they shouldn't dictate workflows, and they shouldn't impose process stops in the middle of a service. Any user interface should be replaceable by an integration system and/or an automated event system in the future. And the future is closer than you think.
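The last two guidelines can be sketched in code. The following is a minimal, hypothetical Java example (none of these class or method names come from the interview) of a service whose inputs and outputs are plain data, so an interactive screen, a batch job, or a Web service endpoint are all interchangeable callers:

```java
// A sketch of a service built as if no human interface existed: the
// request and result are plain data, with no UI or transport types.

// Plain request/result types -- nothing screen- or protocol-specific.
record QuoteRequest(String productId, int quantity) {}
record QuoteResult(String productId, int quantity, long totalCents) {}

// The service contract: one business operation, no hidden workflow stops,
// no processing rules tucked into a user interface.
interface QuotingService {
    QuoteResult quote(QuoteRequest request);
}

// One possible implementation; the pricing rule lives here, not in any UI.
class SimpleQuotingService implements QuotingService {
    private static final long UNIT_PRICE_CENTS = 999; // stand-in business rule

    public QuoteResult quote(QuoteRequest r) {
        return new QuoteResult(r.productId(), r.quantity(),
                               UNIT_PRICE_CENTS * r.quantity());
    }
}

public class ServiceSketch {
    public static void main(String[] args) {
        QuotingService svc = new SimpleQuotingService();
        // An interactive caller and an automated caller use the same
        // contract; either could be replaced without touching the service.
        QuoteResult interactive = svc.quote(new QuoteRequest("SKU-1", 3));
        QuoteResult automated   = svc.quote(new QuoteRequest("SKU-1", 3));
        System.out.println(interactive.totalCents()); // prints 2997
        System.out.println(interactive.totalCents() == automated.totalCents());
    }
}
```

Because the contract knows nothing about its callers, exposing it later as a "public" Web service is a deployment decision, not a redesign.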