Hawaiʻi's Technology Community

An End-to-End Walk Through Our Java Stack...

In this post I will take a walk through our preferred Java application stack from end to end, noting what I consider to be some useful practices along the way. While it's a never-ending process to re-evaluate packages and tools as time goes on, I have found that this basic architecture has held up quite well for many years. We've used it on numerous large commercial applications ranging from critical banking and finance apps to high-volume entertainment sites. I'm not going to try to get into any depth here on each piece, but just tell you what it is, why we're using it, and add any random comments that come to mind. I'll take your feedback and go into more detail on specific topics in future blog posts. If you're already familiar with all of these pieces individually you might still enjoy skimming my comments on them and telling me how I have it completely wrong :)

The diagram above shows the components of one of our web applications, running from the client to the database. In addition to web clients we can of course drop in Swing or .NET front ends using RMI or web services respectively.

Struts 2 - Get The Simple Web Stuff Done

Struts 2 is a rebranding of the WebWork project, which was a lightweight web toolkit based on the XWork controller framework. In a nutshell, Struts 2 allows you to write page controlling actions as POJOs and then render views from those actions using JSPs with custom tags and an expression language called OGNL. What this means is that you can write simple Java classes that transparently receive data from HTML forms or parameters, execute some logic, and then have their properties available to be drawn upon by expressions in JSPs. In this sense it could be considered a more modern, streamlined version of the original JSP concept, but with the ability to plug in renderers other than JSP.

Web frameworks can generally be characterized on a spectrum from fine-grained and component oriented to coarse-grained and page oriented. Struts 2 is lightweight enough that you can push it in either direction, but for the most part we find it useful for the really simple, page oriented stuff. We use more or less an action per page or per group of related pages. As I'll discuss next, if the complexity of a page grows or if pages require anything more than a trivial amount of dynamic / AJAX behavior we simply switch to GWT components and use Struts as the "glue" for putting together the base HTML pages.

An example of a Struts 2 action might be something as simple as the following:
public class CustomerAction extends BaseAction {  // a convenience, you don't have to extend this
    public String execute() { ... }  // business logic
    public List<Customer> getCustomers() { ... }
}

And in a view JSP you might print the customer's last names like this:
<s:iterator value="customers">                    <%-- invokes getCustomers() --%>
    Last Name: <s:property value="lastName"/>     <%-- invokes customer.getLastName() --%>
</s:iterator>

We try to keep the Struts JSPs simple by making extensive use of the Struts include tag to factor our pages into reusable elements:
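For example, a page might pull in a shared header fragment like this (the header.jsp name is hypothetical):

```jsp
<%@ taglib prefix="s" uri="/struts-tags" %>

<s:include value="header.jsp"/>
```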

We also occasionally use the Struts "action" tag to invoke another action and include its output:
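For example, to execute a hypothetical "banner" action and render its result inline:

```jsp
<s:action name="banner" executeResult="true"/>
```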

The OGNL expression language is a powerful and convenient way to grab data from your actions; however, it is a weak link in the otherwise statically typed Java world. Even the wonderful IntelliJ IDEA platform doesn't yet provide help with this. You'll want to turn on debug mode (in the struts.xml file) while developing so that you can see errors in your expressions (Struts normally squelches them). However you must remember to turn debug mode off in production - it has a strangely enormous impact on performance.
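As a sketch, the relevant setting in struts.xml looks like this (remember to set it back to false for production):

```xml
<constant name="struts.devMode" value="true"/>
```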

GWT - Do the Heavy Web Lifting

If Struts 2 were paving the roads and parking lots of a city, GWT would be building the skyscrapers. If you aren't really aware of the Google Web Toolkit or what it can do yet, you should probably spend the rest of your day today learning about it. GWT allows you to write code using the Java language with much of the standard Java APIs and compile it to incredibly efficient JavaScript that runs in any browser. GWT is an amazing accomplishment that imposes order on the whole disastrous mess that is the web browser environment and gives it a true API with type safety and a real RPC mechanism for talking to the server.

I can tell you with certainty that much of the code we've written with GWT would not have been realistic to write by hand in JavaScript. First, there is simply no way that one could write efficient code targeted at each major browser directly, even with the help of popular JS libraries. Add to that the benefits of static typing - all the way from the front end to the database - and you have a killer stack. This is one of the reasons that I'm writing this blog post.

GWT lets you write domain Java code that runs on the client and looks exactly like any other Java code you might write. The UI classes start as a sort of subset of Java Swing and then bow a bit to the realities of the DOM environment. But if you know a bit about the browser environment (a little JavaScript and DOM) and you can write Java, this is certainly the way to approach your problems.
// GWT code looks like Swing code
public class MyWidget extends VerticalPanel {  // Panel itself is abstract; extend a concrete subclass
    public MyWidget() {
        Button hiButton = new Button("Hi!");
        hiButton.addClickListener( new ClickListener() { ... } );
        add( hiButton );
    }
}

There are a few too many aspects to GWT coding for me to show useful snippets here, so I'll just name a few pieces: In your HTML you embed a line or two of JavaScript and place some empty elements where you want your GWT components injected. You write an "entry point" class that allows you to execute logic and set up your app, and you can create RMI-like RPC services for your client Java code to talk to the server. You can even interact with the DOM and "native" JavaScript in the page, all from the happy place of Java.
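Still, a rough sketch of the entry-point piece (the class name and "slot" element id are hypothetical):

```java
// Hypothetical GWT entry point; "slot" is an empty div in the host HTML page
public class MyApp implements EntryPoint {
    public void onModuleLoad() {
        // Inject our widget into the page where the empty element sits
        RootPanel.get( "slot" ).add( new MyWidget() );
    }
}
```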

During development you can run the GWT "shell", which is a custom browser harness that can dynamically reload your code and recompile classes from source on the fly. This allows you to pair up your classpath with either an internal or external server and make changes on the fly. We use the -noserver option and point the shell at our real application server, allowing us to debug and make changes with our code running real data.

We make use of GWT to create rich client features that can cache large amounts of data in the browser. In some cases we implement "editors" that operate on static HTML in the page. This allows us to have static content that is crawlable by search engines but still editable by users. We also take advantage of a relatively new feature of GWT that allows us to serialize type safe data and store it in the page for retrieval on the client side (alongside untyped JavaScript data). See my previous blog post on this subject. There is just too much good stuff here to cover briefly.

JAX-WS - If You Have to Do Web Services

In recent years we have used both Apache XFire and JAX-WS to expose POJO controllers as web services. Since the standardization of Java WS annotations (JSR-181) the code looks more or less the same. So for the time being we're just using JAX-WS.

Deploying a POJO as a web service can be as simple as adding an @WebService annotation to your class:
@WebService
public class MyWSController {
    public String getDescription() { ... }
}

and adding an "endpoints" file to your build, along with a tweak to your web.xml:
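As a sketch (the implementation class and URL pattern are hypothetical), the JAX-WS RI "endpoints" file (sun-jaxws.xml) looks something like this:

```xml
<endpoints xmlns="http://java.sun.com/xml/ns/jax-ws/ri/runtime" version="2.0">
    <endpoint name="MyWS"
              implementation="com.example.MyWSController"
              url-pattern="/mycontroller"/>
</endpoints>
```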


You can then point web service clients at the WSDL (the descriptor for the service) at a URL like:

http://your-host/your-app/mycontroller?wsdl
or generate Java client code for it using the JAX-WS wsimport command.

In reality, we find that offering a web service API with our products is not quite as simple as just exporting our controllers. Web Service clients can be fairly limited in capability and usually have completely different requirements and security concerns. We find that it is usually necessary to design a different kind of API for those clients than we would use locally or for Java to Java. However, if you are creating GWT RPC services you can usually publish them as a JAX-WS web service with more or less the same API.

There are advantages and disadvantages to maintaining a separate WS API in this way. What at first may seem like "dumbing down" the client API actually leads to much cleaner and simpler code on the client side if done correctly. The key is that you need to go all out in giving the client exactly what it needs exactly when it needs it. The client should never have to calculate something that you can do for them. It should never have to construct URLs or make followup requests for data that you know it is going to need. GWT RPC handles object graphs efficiently, so you can freely return highly linked structures without duplication (e.g. a bunch of threaded messages with parent and child links). But you should never give the client more than it needs to do its job. When you start thinking in this way you will find that making changes to the client becomes much quicker.

The trade-off of course is that to do this you may need a completely parallel set of domain or data transfer objects (DTOs) for use in your web API and you will have to convert back and forth as necessary. We have not found a perfect solution for this problem. Even if you could make your primary domain objects do double duty somehow by adding additional client side fields to them you probably wouldn't want to because they expose too much information - IDs, passwords, etc. We have experimented with providing different interfaces on the domain objects for use in different tiers of the apps. But this is problematic. There really seems to be no easy way around just crafting a new API for the new type of client, at least not that we have found. This is a little easier to swallow if you really consider your WS API a part of your product or service and not just an internal facet of your app.

Spring - Forget The App Server

At its heart, Spring is an XML based, configuration and dependency injection framework. But Spring now also serves as an umbrella for all sorts of tools and application services that can be configured via Spring. The effect is that Spring has grown into what J2EE should have been from the start - a loose confederation of lightweight services from which you can pick and choose, as opposed to a heavyweight, one-size-fits-all application architecture. I mention J2EE because you may have noticed that it's mostly missing from our architecture. We are not using an application server and really have no need for one. This is largely due to the fact that the combination of Hibernate with Spring's transactions and security facilities eliminates the need for one. I'll talk more about those later.

Spring Dependency Injection - Snapping Your App Pieces Together and Configuring Them

The heart of Spring is XML configuration where you describe how to create instances of your Java components (beans) and how to wire them together. Later you will fetch these beans from the container or, better, tell Spring where to inject them into other beans, allowing it to wire your app together for you. For example, the "mailer" bean below is an instance of the Mailer class and it is configured by its public property setters using the property tags:
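A sketch of such a definition (the Mailer class and its properties are hypothetical):

```xml
<bean id="mailer" class="com.example.Mailer">
    <property name="smtpHost" value="smtp.example.com"/>
    <property name="fromAddress" value="support@example.com"/>
</bean>
```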

There is a rich syntax for creating things like Lists and Maps in addition to simple String and numeric properties and for calling constructors instead of property setters if you wish. Elsewhere in the config we can refer to that bean and set it as a property on our messaging controller class:
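Something like this, where the ref attribute points at the mailer bean by name (class names hypothetical):

```xml
<bean id="messagingController" class="com.example.MessagingController">
    <property name="mailSender" ref="mailer"/>
</bean>
```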

In the case above the MessagingController class must either have a public setter named setMailSender() that accepts a Mailer or it must declare a field or another setter of type Mailer annotated with the @Resource annotation (JSR-250) to indicate that the mailer is to be injected.

By creating a tree of dependencies in this way you can eliminate most of the need to go to the container to "look up" services. Some people are very religious about this aspect; we are not. We do lookups from Spring in a few places in our apps. Asking Spring for a bean is light weight and just involves getting the application context and calling getBean():
Mailer mailer = (Mailer)myApplicationContext.getBean( "mailer" );  // but don't scatter string names like this

However at minimum we do a few things to tighten this up: First, we put a String constant called BEAN_NAME in the interface for anything that we place into Spring, to codify its name. Second, we wrap the lookup in a generic method that does the cast for us (just convenience) and third, we have created a simple utility that reads our spring config and generates a static, type safe getter for each item in the config; we re-run this periodically to regenerate it (this is just an experiment at the moment).
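A sketch of the second item, the generic casting wrapper (a hypothetical convenience; the unchecked cast is hidden, not eliminated):

```java
// Wraps Spring's getBean() so callers don't have to cast
@SuppressWarnings("unchecked")
public static <T> T getBean( ApplicationContext context, String name ) {
    return (T) context.getBean( name );
}
```

Combined with the BEAN_NAME constants, a lookup becomes a one-liner like `Mailer mailer = getBean( context, Mailer.BEAN_NAME );`.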

There is a lot more to say about Spring but I won't try to tackle it all here. Most major tools have some level of Spring integration now and as of Spring 2.x it is possible to have custom "domain specific" XML markup in spring config. This makes it much easier to configure standard items and saves a lot of the XML pain. Annotation based injection reduces extraneous setters and makes things a little easier to read.

We do use Spring annotations and Aspect programming support to implement method interceptors for things like logging or special analysis. The way that this works is that we tag a method with an annotation and we are able to intercept the method call with our own code that can inspect the arguments and execute logic before and after the method call ("around" interception). This is such a useful combination of annotations and aspect programming that I personally think it should be built into the language. I'll post more about this later.
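As a hedged sketch of what this looks like using Spring's AspectJ-style annotations (the @Timed annotation and the timing logic are hypothetical examples, not our actual interceptors):

```java
@Aspect
public class TimingInterceptor {

    // "Around" interception: runs before and after any method tagged with @Timed
    @Around("@annotation(com.example.Timed)")
    public Object time( ProceedingJoinPoint call ) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return call.proceed();  // invoke the real method
        } finally {
            System.out.println( call.getSignature() + " took "
                + (System.currentTimeMillis() - start) + "ms" );
        }
    }
}
```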

Layering Spring Configurations and Managing Environments

For us, the really killer feature of Spring configuration is that you can not only compose files but you can inherit configuration and override it. We use this to manage the configuration for the many deployment environments that come with a real project: unit test, desktop testing, dev server, integration server, production, etc. We have a base configuration file and we "extend" it with a naming convention like config_test.xml, config_dev.xml, config_prod.xml. Within the "child" environment files we can override a bean definition (simply by declaring it again), add new bean definitions, or even extend bean definitions at the property level or merge property collections within them. Most of the time simply overriding or adding suffices. e.g. in the unit test environment we may drop in a mock service or alternate hosts and passwords.
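For example, a child file might simply redefine a bean from the base configuration (names hypothetical):

```xml
<!-- config_test.xml: replaces the real mailer for unit tests -->
<bean id="mailer" class="com.example.test.MockMailer"/>
```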

We use a system property to identify the environment and we load the corresponding file. Spring provides a standard way to construct and retrieve the application context in a web application. However we have found it useful to unify all of our environments with our own "SpringFactory" that implements our naming convention and adds a web context listener that injects the spring context into the web environment in the standard place (this is much more trivial than it sounds; again fodder for another post). In this way all of our code can go to one source for any resource that it needs, in any environment.

Spring Security - Openness in Locking it Down

Spring Security is an adoption of the Acegi security framework for Spring that has been around for many years. For quite a while Acegi / Spring Security was more like a "best practices" set of interfaces and basic implementations that helped you build your own security. But more recently it has started providing a lot more benefit right out of the box. This is partly due to better integration with Spring. You can now get basic login security for your web application (totally independent of any app server) with just a few lines of Spring configuration and a servlet filter added to your web.xml.
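A sketch using the Spring Security 2.x namespace configuration (the URLs, user, and password are hypothetical):

```xml
<http auto-config="true">
    <intercept-url pattern="/admin/**" access="ROLE_ADMIN"/>
    <intercept-url pattern="/**" access="ROLE_USER"/>
</http>

<authentication-provider>
    <user-service>
        <user name="pat" password="secret" authorities="ROLE_USER,ROLE_ADMIN"/>
    </user-service>
</authentication-provider>
```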

In the above snippet I've shown the "example" authentication provider that lets us hard code users right into the spring config. Of course you wouldn't do this in a real app; instead you'd use one of the real providers that can check a database table or LDAP or you'd write your own (it's a very simple interface to implement). But given the above config Spring Security will automatically generate a default login form, authenticate the user, and store their credentials in the session. You can customize all of this of course, but I want to emphasize how nice it is having defaults to get you rolling.

The above shows URL based security for a web application, but Spring Security also provides annotation based method interception allowing you to annotate classes or methods with required roles:
@Secured ({"ROLE_USER", "ROLE_ADMIN"})
public Customer getCustomer( long id ) { ... }

There is a thread-local security context that you can access from anywhere in your app to look at the current user and the roles granted to them procedurally.
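Accessing it looks roughly like this (Spring Security 2.x era API):

```java
// Grab the current user's credentials from the thread-local security context
Authentication auth = SecurityContextHolder.getContext().getAuthentication();
Object principal = auth.getPrincipal();            // a String or a UserDetails
GrantedAuthority[] roles = auth.getAuthorities();  // the roles granted to the user
```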

The principal will usually be either a String identifier for the user or a Spring UserDetails interface, which you can implement with your own User object and pass down the line for your own use. Spring Security is tremendously flexible and the above explanation only scratches the surface. You can plug in multiple authentication and authorization providers as well as different types of decision makers over them. In general it's easy to plug in your own implementations for any of these pieces.

We find that limiting the application to a handful of user roles is wise and that true authorization (as opposed to authentication) generally requires some application level involvement. Applications tend to have unique requirements for who is allowed to do what to what. While Spring Security does provide some support for ACLs, we usually find it easier to implement our own authorization schemes (either procedural or declarative using Spring aspects) based on the thread-local user context.

We have used Acegi / Spring Security on many projects - both web based and with other types of clients (.NET, web services, etc.). We have used it in combination with custom application authorization and also integrated it with app server security including Weblogic and third party authorization engines such as Securent. Spring Security is one of the tools that really opens up the architecture and helps us get away from a traditional application server regime.

Thread-Local Application Context

I just wanted to emphasize one point made above during the security discussion. Most security packages provide some concept of a thread-local user context where you can determine the current user credentials by virtue of the current thread context. This is accomplished with Java ThreadLocal variables. We find ourselves making more and more use of this to carry other kinds of application context. For example, one of our applications implements highly scalable virtual web hosting and we utilize a thread local context to dictate the current host as well as account information. By limiting (to one, hopefully) the number of places that this kind of environmental information is offered we can minimize the risk of bugs that might leak information across user boundaries.
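A minimal sketch of such a holder (the HostContext name and its single field are hypothetical; a real one would carry account information as well):

```java
// Hypothetical thread-local application context holder
final class HostContext {

    private static final ThreadLocal<String> currentHost = new ThreadLocal<String>();

    private HostContext() { }

    public static void setHost( String host ) { currentHost.set( host ); }

    public static String getHost() { return currentHost.get(); }

    // Clear at the end of each request (e.g. in a servlet filter's finally
    // block) so pooled worker threads never leak one user's context to the next.
    public static void clear() { currentHost.remove(); }
}
```

A servlet filter typically sets the context at the start of each request and clears it on the way out.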

This technique can be taken too far, however, and you should limit the places in your application where you allow yourself to be "aware" of the current context. In the next section I'll talk more about the basic application architecture of controllers, DAOs, models, and utilities and point out how limiting scope keeps these cleaner.

Core Application Architecture - Controllers, Models, DAOs, and Utils

The next bits of our architecture are fairly straightforward, but it's helpful to pin down some definitions.


Our Struts actions and web services contain minimal, "view" related logic. Their primary job is to talk to controllers, which are singletons fetched from Spring. Controllers compose calls to one or more DAOs (Data Access Objects) and other controllers as necessary. All of these dependencies are injected by Spring using the @Resource annotation. We use Spring's declarative transactions facility in both the controllers and DAOs where transactions are necessary. I'll talk more about declarative transactions later.

The controllers represent fairly broad swaths of business logic, but we try to avoid letting them grow to unmanageable proportions. This is an easy pitfall when working with teams of novice developers who do not necessarily think in terms of breaking down code with OO design or proper package structure; they tend to stuff everything into the controllers and allow them to become spaghetti code. The only way to avoid this is with a review process.

An important point that's worth saying explicitly is that we allow our controllers to be "context aware". What I mean by that is that as part of the controller's job of assembling data and implementing business logic, they are responsible for knowing the current user, security context, and any additional application context. But in our architecture this is as far "down" as we want that context information to be accessed. Everything that comes after: DAOs, model objects, and related utilities, should for the most part be transparent in their data handling (no magic). This makes unit testing and debugging much easier.

One note on naming: We tend to name our controller implementations "Controller" and the interfaces that they implement "Service". e.g. UserMediaController and UserMediaService.

Data Access Objects

We generally have a DAO per persistent model object (entity), containing the standard CRUD operations and application-specific "finder" methods. Our DAOs extend a generic BaseDao that looks like the following:
public class BaseDao<T> {
    @Resource SessionFactory factory;
    Class<T> clas;

    public BaseDao( Class<T> clas ) {
        this.clas = clas;
    }

    protected Session getCurrentSession() {
        return factory.getCurrentSession();
    }

    @SuppressWarnings("unchecked")
    public T get( Serializable id ) {
        return (T)getCurrentSession().get( clas, id );
    }

    @Transactional
    public void saveOrUpdate( T obj ) { ... }
}

The BaseDao is injected with the Hibernate / JPA session factory and is parameterized on the class type it will use in order to allow us a type safe getter and saveOrUpdate() method. Additionally our model entities may extend a base entity that provides basic metadata like insert and update times, which we can update transparently in the saveOrUpdate() method. There are other ways to do this in Hibernate, but this works for us for now. I'll talk about the @Transactional annotation next.

In theory, the DAO separation makes it possible to drop in alternate data sources for testing, etc. In practice we never really end up doing this. It's mainly just a logical way to isolate and organize the persistence code. As a general rule we don't allow our DAOs to access other DAOs and certainly not Controllers. We try to keep them as simple and transparent as possible so that when we need to know why a query is returning something we know exactly where to look.

Declarative Transactions

One of the key features that Spring provides and that allows us to break free of the J2EE container is declarative transaction management. What this means is that by simply annotating a method with @Transactional Spring will create a Hibernate session for us and begin a transaction. Upon exiting the method Spring will commit the transaction for us. The code inside the method accesses the "current" session and transaction context through standard lookups (which we wrap for convenience). Spring handles nested transactional calls in the default case by making them part of the same transaction. This means that we can annotate our DAO methods as transactional and also any controller methods that must compose calls to one or more DAOs. In that case the controller method begins the transaction and each subsequent DAO method participates in it; only when the controller method returns is the transaction committed.

An example of a simple DAO method might look like this:
@Transactional
public User getUser( String userId ) {
    // The string here is HQL, the Hibernate Query Language
    return (User)getCurrentSession().createQuery( "from User where userId = :userId" )
        .setParameter( "userId", userId )
        .uniqueResult();
}
The @Transactional annotation can be configured in various ways including to perform nested or optional transactions instead of joining, to hint that a call is read-only, and to indicate whether to rollback on exceptions.
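A sketch of a couple of those options (the method names are hypothetical):

```java
// Read-only hint, allowing the session and driver to optimize:
@Transactional( readOnly = true )
public List<User> findRecentUsers() { ... }

// Start a new transaction instead of joining the caller's:
@Transactional( propagation = Propagation.REQUIRES_NEW )
public void writeAuditRecord( String message ) { ... }
```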


Model Objects

The model objects used in our middle tier are our business domain objects and some or all of them end up mapped to the database via Hibernate / JPA, which I'll discuss next. Hibernate gives us a great deal of flexibility in mapping our domain objects to the database, but we do have to think to some extent in terms of database semantics when we design our model. We refer to model objects as "entities" when specifically referring to the persistence context.

It's important to remember that the core concepts of model and view do not apply only at this, coarsest, level of the application but occur in miniature everywhere good object oriented design is employed. For example, in our Struts actions we often create composite or specialized (non-persistent) objects to feed data to the view JSPs. I've heard these called "view beans" and they are really "view models" - a model for a particular facet of a view. It's important not to get bogged down in thinking that you can only have "model" phenomena in this tier.

Hibernate / JPA

The Java Persistence API (JPA) is a standardized set of annotations and a specification for mapping Java POJOs to a relational database. Hibernate has for some time been the premiere object-relational mapping tool for Java and it now implements the JPA. We use Hibernate via the standardized JPA annotations and a few Hibernate-specific extensions.

I've shown a few snippets of Hibernate code above, but I'm going to have to plead again that I can't really begin to tell you everything you'd want to know about this topic in a few paragraphs. The basics are that Hibernate allows you to annotate your Java objects as persistent, to indicate that a particular property is the key, and to map its properties to a database table at whatever level of granularity you desire. Hibernate supports polymorphic objects in a flexible way, allowing you to map them to a single table or multiple tables. It provides strategies for dealing with key generation and a query language (HQL). The Hibernate session serves as a first-level cache and you can easily enable second-level (shared) caching of entities and even queries within your application. Hibernate can also automatically perform optimistic locking for you, allowing you to more safely round trip data to the client.

We use Hibernate with as little customization to its defaults as possible, allowing it to base database table and column names on our code. We then generate the schema (SQL to set up the schema) from our code base using a standard Hibernate tool Ant task. Occasionally we must tweak names or types, but this is easy with additional annotations.

Much available Hibernate documentation refers to its older style XML configuration. We have abandoned XML config of Hibernate for the most part, both for mapping objects and also for the core configuration. Instead we use Spring to configure Hibernate (Ok, that's XML too) and JPA annotations for mapping. It is necessary to configure Hibernate's session factory through Spring in order to use the declarative transaction facility of Spring. Unfortunately this area is a bit messy right now. First, the Spring and Hibernate people seem to be at odds and so the integration is not as nice as it should be (as of last time I checked). Next, some tools such as the Ant task based schema generator require the old style standalone XML configuration for the session factory. This has led us to maintain two copies of part of our config file. This is annoying, but not terrible, as all the tools really care about is the list of mapped files. There may be a way to work around this by now but we haven't re-investigated for a while.

A simple entity might look like this:
@Entity
public class User {

    @Id @GeneratedValue
    private Long id;

    @ManyToOne
    private Organization organization;

    private String userName;
}

The above tells Hibernate to expect (and the schema tool to generate) a table named User with nullable long valued key called id, a text username and a foreign key relationship to another mapped entity called Organization. Parameters on the annotations give you additional control over these mappings if you need them. For example you can specify how ids are generated, what names are used for the columns, how large they should be, and what strategy should by default be used when loading related entities.

The Hibernate query language replaces SQL with a more object oriented view of your data. It is fairly solid and can accomplish more or less whatever you would want to do in SQL, but with a little less database lock in. In a simplistic application you can simply retrieve an object and navigate to its related objects via the object properties (within the scope of the hibernate session) but HQL allows you to do more complex queries efficiently.
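For example, a query that pulls related objects eagerly in a single round trip rather than one lazy load per row (the entity names here are hypothetical):

```java
// "left join fetch" loads each Message together with its author in one query
List messages = getCurrentSession()
    .createQuery( "from Message m left join fetch m.author where m.created > :since" )
    .setParameter( "since", since )
    .list();
```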

MySQL and Data

We have worked with many databases over the years but recently have standardized on MySQL. We used to prefer Postgres, but most of the reasons have evaporated and MySQL is just too popular to avoid. It's extremely easy to deploy and to work with and has simple replication and clustering facilities that are rapidly maturing.

We populate our test / sample data using Java code that we maintain as part of our application. In the past we've used DBUnit and other tools to maintain test data, but we find it easier now to simply build a set of test data using our own application code. Early on we use DAOs and later we migrate to controllers as the app matures. It's easy enough with MySQL for each developer to have their own database instance, so sharing isn't really an issue any longer.

Tomcat, Apache, Linux, the Amazon Stack, and More

Everything we've shown above runs on Tomcat. We don't have a need for an application server at the moment. We may find one in the future as they polish clustering features and other add-ons, but right now we just like to keep it simple. We leverage Apache and HAProxy to load balance across servers.

We run Linux (Fedora or Ubuntu) and we really, really love the Amazon Web Services stack. If you haven't checked out EC2 you need to. Being able to fire up Linux machine instances and run them for an hour or a year, paying by the CPU hour, is fantastic. Other Amazon services that we use include their Elastic Block Store (EBS), which offers detachable raid-like persistent storage; Elastic IPs, which allow you to throw IPs around instances on a whim; and S3, their bulk storage solution. AWS offers multiple "availability zones" for geographic isolation and recently started offering zones in Europe as well as the U.S. (We can't wait for Asia!) They also recently announced a content distribution network that optimizes your points of presence for pushing data to clients with caches all around the globe. You can get started using this stuff for nothing more than a few cents per hour of computing cost.

We love Intellij IDEA and the Mac OS X platform, but we also use Eclipse and Windows too. I'll also randomly mention that we like SquirrelSQL for our SQL browsing needs.


I hope that this post, outlining our end to end preferred architecture and practices may offer something useful to you and I look forward to following up in future postings. This was partly an experiment to see if a broad post would be as popular as a deep posting. I leave it to you to decide if this was helpful or just prohibitively lengthy ;)




Comment by Tim Dysinger on March 21, 2009 at 2:08pm
Boy I am glad I stopped doing Java. 10 years is enough.
Comment by Joseph Lui on March 2, 2009 at 10:15pm
"However I think that the starting points are kind of an apples to oranges comparison."

Ah, the comparison was not meant to be between your particular Java stack and RoR; it was a comparison between what I am calling the "best-of-breed" versus an "integrated" strategy, using RoR as an example of the latter. Clearly, we could have chosen a simpler Java stack if we wanted to do a more apples-to-apples comparison. The takeaway is that you get more function and flexibility at one end of the spectrum; while you get substantially lower costs, especially in terms of human capital, in the integrated approach.

Thanks for the C-JDBC link! That's exactly what I was referring to.
Comment by Bill on March 1, 2009 at 12:58am
So Pat, I once used FreeMarker instead of JSPs and found it more enjoyable to work with, although there were some annoyances. Would that break the proposed stack?
Comment by Philip Johnson on February 26, 2009 at 6:42am
Very nice post! Two libraries we use that are related to your stack:

(1) Wicket:

After fooling around with GWT for about a year, we moved to Wicket and are much happier. For us, it seems to have nicer integration with AJAX and provides better facilities for componentizing UI elements in Java. There are people on both sides of the Wicket vs. GWT debate.

(2) Restlet

I totally agree that "offering a web service API with our products is not quite as simple as just exporting our controllers". Our system is a SOA, and so a high quality RESTful API is basically the most important UI design feature. Restlet is a very elegant framework that provides "natural" constructs for REST concepts.

Comment by Pat Niemeyer on February 26, 2009 at 5:18am
BTW, what happened to that sql adapter you mentioned before that lets you scale out databases pretty much indefinitely?

You are probably thinking of C-JDBC. C-JDBC lets you arrange DBs in a RAID-like fashion for redundancy or striping. A lot of what MySQL has done with clustering addresses the same issues, but C-JDBC gives you leverage that is transparent to the application and can be used with any JDBC-accessible database.
Comment by Pat Niemeyer on February 26, 2009 at 5:12am
Contrast that with RoR for example where most of the above discussion is rendered moot, not because it's insignificant, but because it is already set and known by all based on convention.

I agree that the RoR scaffolding and conventions make it a lot easier to start a new project or to find the major elements of an existing project and I wish that Java had grown up with more project level conventions handed down from above. However I think that the starting points are kind of an apples to oranges comparison.

I believe that by the time you add all of the features in the above stack to RoR (e.g. security, threading, richer OR mapping) and something equivalent to GWT (if there were such a thing), RoR will start to look a bit heavier. And that's fine; it will still be sleeker than the above. But if you end up using most of those features, you're going to need about the same amount of knowledge either way.

It seems to me that as parallel, competently designed RoR and Java projects grow in size they will converge on the same complexity and then it's just a matter of what the respective languages and libraries give you. As you can probably imagine I'm partial to Java because I believe it makes these larger projects more maintainable. But I am growing to like Ruby more the more I work with it too... It's just replacing different kinds of projects for me.
Comment by Joseph Lui on February 25, 2009 at 11:11pm
Man, in the time it took me to read this post not only would I have been done with the project using IBM Websphere, I would have single-handedly integrated the app into our inter-departmental ESB backbone AND fully complied with our new, game-changing SOASP governance initiative.

I'm joking! I'm joking! Relax. Thanks for sharing your preferred application stack. I would describe this architecture as a "best-of-breed-strategy" in the sense that you take (arguably) the best Open Source component for the job at each layer of the stack and combine them all together. At the other end of the spectrum, I would contrast this with an "integrated" strategy where the entire stack is pre-determined and each level of the stack is designed by basically the same entity.

The single greatest advantage of the former, I believe, is increased functionality and flexibility. You can usually do a whole lot more at each level than an integrated stack affords. You also enjoy the power of flexibility, in that you can replace levels of the stack with the best of breed that suits your project's needs or the times. The single greatest advantage of the latter, I believe, is decreased cost, specifically in human capital. For the former, you have to have a commanding knowledge of all the different parts AND THEN how they ultimately hang together, and that's just the beginning! For every developer you bring on, you have to break each one in on a higher learning curve. As my .NET friend once said: it's not that Java is better, it just seems better because it requires a genius to use it.

Contrast that with RoR for example where most of the above discussion is rendered moot, not because it's insignificant, but because it is already set and known by all based on convention. No need to shop for parts; no need to experiment how they fit together; no need to make and convey your own patterns; no need to even architect! Which approach is better clearly depends on the project requirements in question.

That discussion aside, part of me mourns the gasping, dying sounds of JEE. Despite the 80% "failure" rate of JEE projects, surely there must be some redeeming value to the app server, the EJB container, the glorious guts of EJBs themselves, and all the other JEE services that most people... uhm.. er... have no use for.

BTW, what happened to that sql adapter you mentioned before that lets you scale out databases pretty much indefinitely?
Comment by Pat Niemeyer on February 22, 2009 at 5:01pm
I should clarify that we do use the Spring annotation-based DI as far as the @Resource markup goes. We still define our beans (controllers, DAOs, etc.) in XML, however. I have not read up on the very latest changes in Spring, but at the time I reviewed the annotation changes in 2.x it looked like we'd still need XML to set up Hibernate, Spring Security, etc. So we've just stuck with it. The Spring XML configuration is one of those things that is a little tedious, but I spend far less than 1% of my coding time messing with it, so it's just not worth a lot to me to think about it.

I am interested in Grails but not up on its progress. It sounds like it has potential. But again, when writing DAOs, 99% of my time is spent on the domain problems, writing finders and thinking about queries, so I just don't care that much. I have created a little template in IntelliJ IDEA that generates a DAO skeleton for me, and that takes care of the boilerplate.

As for GWT, I think the Google people created this for themselves to use, and they are some pretty experienced guys :) As I said in the post, I think people are doing things with GWT that would just be too hard to tackle without it. For example, we have written fairly elaborate parsers that interpret user-entered text. They would have been really painful to write without the benefit of working in Java. Similarly, for elaborate UIs, having statically typed containers via generics in Java lets your IDE do a lot of work for you... I wouldn't give that up for anything.
Comment by J. David Beutel on February 22, 2009 at 2:14pm
I've found the annotation version of Spring DI easier to maintain than separate XML files. I realize that some people prefer XML, however.

I've just started drinking the Grails Kool-Aid, so I wonder if you're familiar with that stack. E.g., GORM versus do-it-yourself DAOs. How is the Grails GWT plug-in? Should I try learning GWT along with Grails? Is GWT's advantage mainly for GUI programmers just starting on web apps, or does it maintain an advantage for experienced web app developers?
Comment by Cameron Souza on February 22, 2009 at 12:08pm
Wow. Thank you! Is this a preview of the next book?


