Tuesday, December 30, 2008

stuff changes on you...

I am sure at some point in time we all make a judgment call to go with a piece of software based on its merits. The criteria for what constitutes a merit differ from one individual to another. We come up with a set of criteria and a set of candidate software packages to choose from. We subject the software to the criteria and make a call. All well and good.

If the software we choose happens to be the choice of many others in our 'sphere of interest', we have a hit. And if we turn out to be an outlier, we end up in one of two scenarios.

1. Either we really have made a good choice and the market around us has to 'see' the light and catch on, or
2. We really missed the mark and we have to re-analyze our choice and 'catch up' with the rest.

Of course, the realization of 'hitting the mark' is itself relative. I'm sure there are some die-hard fans of EJB2 who will still say that Hibernate and the other ORMs haven't got it right.

So, what would you say to a person trying to make a call between EJB2 and Hibernate?

One can't say for sure. Consider this exchange.

Joe: It does not matter what features EJB2 and Hibernate have; our company policy is not to use open source products. All products have to comply with industry standards.

Mark: But what is an 'industry standard'? Isn't it whatever the 'industry' accepts as a standard... meaning us, developers and architects? If we say Hibernate is the better choice, then it is a standard.

Joe: Possibly, but our policy writers do not see it like that. If it does not come from the Suns, IBMs, Xs or Ys, it's not a standard.

Mark: but..

So, you see, Joe and Mark may agree on the merits, but they still have to disagree because of the criteria used for making the choice.

Then there are re-incarnations.

Remember when you looked at JSF while the world was going with Struts and its sisters? It's too dotnet. It's too component oriented. It's not MVC enough. It's this and it's not that. The 'sphere of interest' goes with Struts, WebWork, Spring MVC, etc. And then JSF comes back in a re-incarnation. It has the blessing of the big names Joe was looking for. It has the blessing of 'us, developers/architects' in the open source implementations of JSF that Mark was looking for.

So, what would you say about a person who chose JSF at a time when it did not make the cut... and stuck with it? Is it because that person 'saw' the promise and potential and had the foresight of its success? Or is it just that the person gambled on it and the gamble paid off?

It's a similar story with EJB in its new EJB3 re-incarnation.

Then there is stuff that changes on you.

How often does this happen: you make a software choice with due diligence. Even the 'sphere of interest' is with you. Things are going fine... You bet your projects and your raise on it. And then it hits you. The people responsible for your chosen software bail out on you. The product is no longer supported.

Recently I was looking for a tag library for using JFreeChart... Cewolf was the number one choice, even documented and recommended by the JFreeChart folks... But it is not supported anymore.

Would you use it in your project? (on a side note, I'm writing my own now.)

Another case in point is AppFuse. I have used it multiple times, with JSF and with Spring MVC. Every time I use AppFuse, I have to make a call about which MVC framework to use. And to be frank, I had gotten to like Spring MVC. But it seems AppFuse may drop support for Spring MVC. (http://raibledesigns.com/rd/entry/appfuse_light_converted_to_maven). I hope I read it wrong.

So, what would you say to a person who chose Spring MVC with AppFuse, only to have the AppFuse folks possibly drop Spring MVC support?

One could say, 'It's the price of getting stuff for free.'
Or one could say, 'Get a life... what else do you expect? It is a hazard of the landscape.'
Or, as Arnold would say, 'Stop whining.'

I say: is it even something one should feel averse to? Is it something that is bad for you? Isn't change supposed to be good?

Yes, sometimes you change your choice, sometimes the 'sphere of interest' changes its choices, and sometimes someone else changes your choice for you. It's an opportunity.

It's an opportunity to learn something new. It's an opportunity to make yourself adapt... and be agile.

Yes, a root canal at the dentist's office hurts... but it's good for you.

Tuesday, September 16, 2008

Hibernate and connection pools

I recently spent some time researching and fixing a bug in one of my applications. This is just a summary of its solution, for all those it may help.

Problem:
MySQL has a default time-out on idle connections; the default is 8 hours. If the application uses a connection that has been sitting idle in the connection pool beyond the database time-out period, it gets an exception.
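If you want to see what your server is actually using, a quick query shows the current value (this is standard MySQL; I include it only as a sanity check):


SHOW VARIABLES LIKE 'wait_timeout';
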

Solution:
Well, firstly, I thank all those whose blog posts and mailing-list threads I had to read to arrive at the solution, some of which I will quote here.

To reproduce the problem for testing, I updated MySQL's my.ini file (the Windows file; use the corresponding my.cnf in Unix/Linux environments) and added a wait_timeout value of 120 seconds.


[mysqld]
wait_timeout=120


With the lowered timeout in place, reusing a connection that had been idle for more than 2 minutes immediately produced the error.


org.hibernate.exception.GenericJDBCException: Cannot release connection
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:91)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:29)
..
..
Caused by: java.sql.SQLException: Already closed.
at org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:84)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.close(PoolingDataSource.java:181)
at org.hibernate.connection.DatasourceConnectionProvider.closeConnection(DatasourceConnectionProvider.java:74)
at org.hibernate.jdbc.ConnectionManager.closeConnection(ConnectionManager.java:451)
... 44 more


I was using the commons-dbcp connection pool as a Spring-managed bean. Since I used AppFuse's Spring MVC archetype as a quickstart, it came configured with commons-dbcp. (http://appfuse.org/display/APF/AppFuse+QuickStart)

Gavin King of Hibernate suggests not using commons-dbcp, as it is buggy. (http://opensource.atlassian.com/projects/hibernate/browse/HB-959) So, taking his advice, my first step was to change the datasource to a different one.

So the next question was: if not dbcp, then which one? After a while I stumbled across this connection pool that Hibernate supports. I must say I had not heard of it before, and with a name like c3p0 it is hard to forget. (http://www.hibernate.org/214.html)

Accordingly, I first changed the bean definition from


<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${jdbc.driverClassName}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
<property name="maxActive" value="100"/>
<property name="maxWait" value="1000"/>
<property name="poolPreparedStatements" value="true"/>
<property name="defaultAutoCommit" value="true"/>
</bean>


to the bean definition that uses c3p0:


<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" >
<property name="driverClass" value="${jdbc.driverClassName}"/>
<property name="jdbcUrl" value="${jdbc.url}"/>
<property name="user" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>


Note the difference in some of the property names between the two: driverClassName/driverClass, url/jdbcUrl, username/user, etc. Since these are bean properties, and not a resource definition (such as one for a web container), the names are whatever the bean writers chose them to be.

Refer to this link for more discussion about setting up c3p0 as a bean.
http://forum.springframework.org/showthread.php?t=16309

As you can see, I did not set any other property in the bean definition. The reason requires a little bit of explanation.
In another link (http://forum.springframework.org/showthread.php?t=13078), you can see how the properties can be set.


<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" >
<property name="driverClass" value="${jdbc.driverClassName}"/>
<property name="jdbcUrl" value="${jdbc.url}"/>
<!--<property name="user" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>-->
<property name="properties">
<props>
<prop key="c3p0.acquire_increment">5</prop>
<prop key="c3p0.idle_test_period">100</prop>
<prop key="c3p0.max_size">100</prop>
<prop key="c3p0.max_statements">0</prop>
<prop key="c3p0.min_size">10</prop>
<prop key="user">${db.user}</prop>
<prop key="password">${db.pass}</prop>
</props>
</property>
</bean>


But if you set the properties in this manner, c3p0 does not pick up the user and password as regular bean properties; they have to be specified as prop entries inside the 'properties' property. So I dropped this approach: I kept user and password as plain bean properties and moved all the other settings into a c3p0.properties file.


c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=100
c3p0.initialPoolSize=5
c3p0.maxIdleTime=80
c3p0.maxPoolSize=10
c3p0.maxStatements=0
c3p0.minPoolSize=5


One thing that is easy to miss is the property names. Hibernate uses different names for the corresponding c3p0 properties. For example, the property of most interest, c3p0.idleConnectionTestPeriod, is named c3p0.idle_test_period in the Hibernate configuration. That makes me wonder whether the property was actually set correctly in the second Spring forum link I quoted. (http://forum.springframework.org/showthread.php?t=13078)

So, if you set your c3p0 properties in a c3p0.properties file, you should use the c3p0 property names, and in the Hibernate config files you should use the Hibernate equivalents.
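To keep the two naming schemes straight, here is the mapping as I understand it; the left column is the name used in the Hibernate configuration, the right column is the native name used in c3p0.properties:


c3p0.acquire_increment   ->  c3p0.acquireIncrement
c3p0.idle_test_period    ->  c3p0.idleConnectionTestPeriod
c3p0.timeout             ->  c3p0.maxIdleTime
c3p0.max_size            ->  c3p0.maxPoolSize
c3p0.min_size            ->  c3p0.minPoolSize
c3p0.max_statements      ->  c3p0.maxStatements
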

Another note of caution, even though it is mentioned in passing in the Hibernate document (http://www.hibernate.org/214.html): when you set any of the Hibernate c3p0 properties, there are 7 properties that Hibernate overrides. So you should set all of those properties in Hibernate if you do not want Hibernate defaults to override your c3p0 settings or defaults. You will find a reminder in the c3p0 documentation as well. (http://www.mchange.com/projects/c3p0/index.html#hibernate-specific)

Here is my hibernate.cfg.xml snippet.


<session-factory>
<property name="connection.pool_size">10</property>

<property name="c3p0.acquire_increment">1</property>
<property name="c3p0.idle_test_period">100</property> <!-- seconds -->
<property name="c3p0.max_size">10</property>
<property name="c3p0.max_statements">0</property>
<property name="c3p0.min_size">5</property>
<property name="c3p0.timeout">80</property> <!-- seconds -->
..
..


As you can see, they are the same as those in the c3p0.properties file. In any case it does not matter; the Hibernate values will supersede any corresponding values set in the c3p0.properties file.

So, what do we have here? The pool tests idle connections every 100 seconds (idle_test_period) and retires connections that have been idle for more than 80 seconds (timeout/maxIdleTime), while MySQL's wait_timeout is 120 seconds. This means a connection that MySQL is about to kill never sits around long enough to be handed back to the application.

The logs show evidence.


DEBUG [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0] C3P0PooledConnectionPool.finerLoggingTestPooledConnection(315) | Testing PooledConnection [com.mchange.v2.c3p0.impl.NewPooledConnection@17fd168] on IDLE CHECK.
DEBUG [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2] C3P0PooledConnectionPool.finerLoggingTestPooledConnection(319) | Test of PooledConnection [com.mchange.v2.c3p0.impl.NewPooledConnection@f5b2da] on IDLE CHECK has
SUCCEEDED.
DEBUG [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1] C3P0PooledConnectionPool.destroyResource(468) | Preparing to destroy PooledConnection: com.mchange.v2.c3p0.impl.NewPooledConnection@70ccb
DEBUG [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1] NewPooledConnection.close(566) | com.mchange.v2.c3p0.impl.NewPooledConnection@70ccb closed by a client.
java.lang.Exception: DEBUG -- CLOSE BY CLIENT STACK TRACE
at com.mchange.v2.c3p0.impl.NewPooledConnection.close(NewPooledConnection.java:566)
at com.mchange.v2.c3p0.impl.NewPooledConnection.close(NewPooledConnection.java:234)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.destroyResource(C3P0PooledConnectionPool.java:470)
at com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask.run(BasicResourcePool.java:964)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)


So, to make a long story short:

Use c3p0 as your connection pool if you are using Hibernate. c3p0 can be configured as a Spring-managed bean. Set Hibernate's c3p0 properties so that they override both the c3p0 defaults and Hibernate's own c3p0 defaults.

Happy programming.

By the way, I like this convention over configuration thing, but care is needed. I may write about it in another post.

Sunday, August 10, 2008

Unit Testing in Software Development and Junit

Late adoption of certain practices means that you talk about a technology or a process as something that deserves a second look, whereas the industry around you considers it the norm. In my case, that is how it is with incorporating unit testing into the development process.

Sometimes one gets so cosy with System.out.println or response.write for testing software, or with using log4j (well, I have another story about log4j's late adoption) for verifying tests, that one has to wonder: doesn't the 'log' in log4j stand for logging, not testing?

Let's see. When we talk about testing, we talk about a piece of functionality, encapsulated in a single method or a sequence of method calls, behaving as desired. So when we test, we know the expected outcome, and we want to verify that the outcome of an actual run matches it. A System.out.println or a log statement may serve this purpose, but then a human has to sit at a terminal reading the output to verify it. Say a system has 10 classes with 10 methods each; that is 100 calls to verify. How can that be automated?

Before going into unit testing with JUnit, let me spell out what's wrong with using System.out.println (SOP going forward) or log4j for testing. With the objections out of the way, we can discuss how JUnit addresses them.

1. clutter
Putting in SOPs clutters your code with statements that have to be removed after the code is 'tested'. In many instances, a developer either does not remove them or keeps them commented out, 'just in case I need to test it again'. So your console output is cluttered with unwanted output and your code is cluttered with unwanted lines. Not to mention, when a developer wants to test again, he/she has to either type the SOPs in again or uncomment the existing ones.

2. misuse
We all agree that log4j is for logging purposes. And SOP is more generic in use and cannot be tied just to logging. SOPs have a place and logging has a place, but their place is not testing. They can be used alongside testing, for logging and for printing test results to the console, but they are not test tools. Tests should be executed and the system should tell you whether all of them succeeded, without a person parsing 3,000 lines in a log file trying to figure out the results.

3. non-repeatability
When a developer tests using log statements, the code is tested only when it is written. That seems like a redundant statement, but read on. Say a piece of functionality is tested with log statements and deployed. Another developer comes along and modifies your code to add more functionality. He/she tests the new work using logs and is satisfied with the results. You can't rule out the possibility that a bug has been introduced in your original piece of work. If such a bug gets introduced, who tests for that, and when? So tests should be repeatable: if anything changes in your code or elsewhere, re-running the tests verifies that your original coded behaviour has not been affected.

4. undocumented
Since the use of SOPs and logs is left to the needs and desires of an individual developer, there is no convention governing when they are used. Accordingly, a developer new to the team, or a team that inherits an application, has no clue what is tested and what is not. Also, all the SOPs that made sense to one developer or team are 'noise' to another developer or team.

5. non-intuitive
If a system is tested using SOPs and logs, how can one answer this simple question: how do I test this thing? You can't. A person doing the testing has to have intimate knowledge of the code and has to know what to look for in the logs. Completely impractical. Wouldn't it be easier to just answer 'hey, run the 'test' target in Ant' or 'run Maven and don't skip the tests', and have your build tool run all the tests and give you the results?

I think these five points make my case. So, how can these issues be addressed? Let me explain with a JUnit example.

Let's begin with a class that we want to test. Please keep in mind that the example is deliberately simple, so as to convey the point. You can read more documentation on JUnit's website. (*1)

public class Doodle {
    public int add(int a, int b) {
        return a + b;
    }
}

The class Doodle has a method add that adds two numbers. We need to make sure it adds correctly. (One might say: a+b should add... what's there to test? Yes, I know... this is just an example... pretend for a moment that the implementation is pretty complex and needs testing.)

How do we go about testing this without using the dreaded SOPs and log statements?

We write a test class. Here is how it looks.



import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;

public class TestDoodle {
    private Doodle doodle;

    @Before
    public void setUp() {
        doodle = new Doodle();
    }

    @Test
    public void testDoodleAdd() throws Exception {
        int result = doodle.add(2, 2);
        assertEquals(4, result);
    }

    @After
    public void cleanUp() {
        doodle = null;
    }
}

We have a TestDoodle class with three methods.
setUp() - It's annotated with @Before, which tells JUnit to run this method before every test method.
testDoodleAdd() - It's annotated with @Test, which tells JUnit to run this method as a test when the class is invoked for testing.
cleanUp() - It's annotated with @After, which tells JUnit to run this method after each test method.


In testDoodleAdd(), we invoke our add method with 2 and 2 as test values and get the answer back in the result variable, which should be 4.
As mentioned earlier, I need to know the expected result before running the test. That check happens in the JUnit method assertEquals(expectedValue, actualValue). If the two values are equal the test passes; if not, assertEquals throws an AssertionError and JUnit reports the test as a failure.

How do we run it? It can be run from the command line, but here is an Ant snippet.

<target name="test" depends="compile">
<junit fork="yes" haltonfailure="yes">
<test name="TestDoodle" />
<formatter type="plain" usefile="false" />
<classpath refid="tests.path" />
</junit>
</target>

So the test can be run with 'ant test' on the command line. (Note: this snippet depends on a 'compile' target and a 'tests.path' reference, which are not included, but I think the intent is clear.)
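And if you just want to try it without Ant, the JUnit 4 console runner can be invoked directly. The jar name below is only illustrative; use whatever JUnit 4 jar you have on hand, and ';' instead of ':' as the classpath separator on Windows.


javac -cp junit-4.4.jar Doodle.java TestDoodle.java
java -cp .:junit-4.4.jar org.junit.runner.JUnitCore TestDoodle
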

Now, let us see how this stands up against the above-mentioned objections to using SOPs and logs.

1. no clutter
As is evident, Doodle.java is clutter-free.

2. no misuse
There is no unnecessary use of SOPs and log statements for anything other than their intended purposes. (I know, there are none in the example, but I could log the add operation for logging purposes... not for testing.)

3. repeatable
This test can be run any number of times, without any side-effect. Each time the class is modified, this test, in addition to any new test can be run. If any of the results change due to code changes, it will manifest itself next time the test runs.

4. documented
There is a TestDoodle class, and its purpose is to test the Doodle class. And this class, along with its annotations and comments (if any), is clearly a clean way of understanding which tests are performed and what the expected results are.

5. intuitive
OK, I know a person has to know that JUnit is being used, and in my case that Ant is used to run the tests. But most of that is staple for Java development nowadays. It is very unlikely that an experienced Java developer will not know what is going on after looking at the Ant build.xml and the corresponding test classes. So it is very intuitive by convention.

In summary, incorporating unit tests in the development process is necessary for many reasons beyond the general concept of testing. And if JUnit makes it easy, why not use it?

(*1) http://www.junit.org/

Monday, July 28, 2008

Why Magnolia

I chose Magnolia (www.magnolia.info) again for my third content management solution. And I still like it. Though Magnolia is capable of much more, the following are the reasons I prefer it.

1. administrator managed public viewing site

Magnolia comes with role-based security. It is extensible, as its security is based on JAAS (Java Authentication and Authorization Service). Branches of the site can be secured by role, and roles can be configured with read/write access rights. Roles and permissions can easily be managed and modified.

2. wysiwyg editing

Magnolia adds its own JavaScript-based controls for editing the pages. The controls appear in edit mode and are hidden in preview mode. They do not exist in published pages. I would not call it 100% wysiwyg editing, as Magnolia does not allow 'inline' editing of the pages, but it does allow form-based editing in popup forms. It still qualifies as wysiwyg because the preview mode of the page is exactly how the published page will look.

3. complete control over look & feel of the site

Since you as a developer create templates for the page layouts, and templates for the paragraphs, you have 100% control over how the site looks and behaves. I think this is the most important asset of Magnolia. I've seen many other CMS solutions that let you create page and paragraph layouts with web-based tools, but with any web-based tool come certain limitations. With Magnolia, since a developer codes the templates, there is no such limitation. (A rough sketch of such a template appears after this list of reasons.) (Yes, the newer versions of Magnolia have an online template builder tool (Sitedesigner), but I have not used it, so I cannot comment on it. Also, it is part of the 'enterprise' edition.)

4. database independent implementation

Magnolia stores its data using the Jackrabbit content repository. And since you as a developer create dialogs (to capture data) and paragraphs (to display data), and use Magnolia's tag libraries to access data for publishing, not much Jackrabbit know-how is necessary to create a Magnolia solution. But the fact that you do not have to configure a database as part of the implementation is a plus. One less thing to worry about while deploying your Magnolia solution.

5. a simple workflow, create - preview - publish

The workflow is simple. A content manager creates a page, previews it and, if satisfied, publishes it. I think for a vast majority of CMS solutions this simple workflow works and suffices. As a developer you can stack multiple installations of Magnolia to create more elaborate workflows, like create - preview - review - confirm - publish, but I have not implemented such a system. I have read it is possible, but I'm not sure what baggage it comes with. If a more complex workflow is required, I would still recommend getting a good understanding of Magnolia and its capabilities for expanding the workflow before you 'reject' it for another tool.
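As promised under point 3, here is a rough, from-memory sketch of what coding a page template looked like in the JSP style that Magnolia 3.x used. Treat the taglib URI and the tag names as assumptions and check them against the documentation for the version you are on; this is only meant to convey the flavour of template development.


<%@ taglib uri="cms-taglib" prefix="cms" %>
<html>
<head>
<title><cms:out nodeDataName="title" /></title>
</head>
<body>
<!-- shows Magnolia's edit bar when the page is opened on the authoring instance -->
<cms:mainBar />
<!-- render every paragraph the content manager has added to this collection -->
<cms:contentNodeIterator contentNodeCollectionName="mainColumnParagraphs">
<cms:includeTemplate />
</cms:contentNodeIterator>
</body>
</html>
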

What else?

Now, is there something that I would like to see in Magnolia that it does not provide? Of course yes. No tool is perfect and no tool gives you 100% of what you want. The reason a tool is a 'good' tool depends on how easily it 'allows' you to reach your 100% beyond the 'X'% it gives you.

So, I would like Magnolia to provide a good Eclipse project, so that developing templates is much easier. (No, I'm not talking about an Eclipse project to work on the Magnolia code itself... I'm talking about an Eclipse project for creating and managing templates and other code artifacts (such as docroot artifacts).)

I would also like adding custom code, like servlets and JSPs, to be more seamless. Yes, it is allowed, but it should be more seamless. And I would like better integration for JSPs that read a managed header/footer from Magnolia templates.

I would also like more pre-built modules, like a blog or a message board, though I'm not sure that really adds value to Magnolia. But it sure would make it more competitive, as its competitors 'boast' of such 'out of the box' features.

So, in short, Magnolia is a good tool to create highly customized and managed public domain websites.

Monday, June 9, 2008

ORM Models : Hibernate with Annotations and Cayenne

Before I used Hibernate (*1), I played around with Cayenne (*2). I liked its visual Modeler. It allows you to create models and map them to database fields. It also generates Java code and SQL scripts. All packaged up. I liked it. Until I worked with Hibernate.

I must admit, I tend not to like things that get too much hype, even though sometimes the hype is well deserved. So in such cases I kind of 'miss the boat'. Hibernate falls in that category... But in the end, having a better grasp of competing technologies never hurts... Let me explain. I will limit the discussion to a few items which I feel are 'make or break', at least for me, and please take it for what it's worth.

Let's take Cayenne first, and look at its generated model. Or should I say 'models', as it creates two files.

public class Blog extends _Blog {
...

}


Pretty small... wait till you see _Blog.


public class _Blog extends org.apache.cayenne.CayenneDataObject {

    public static final String BLOG_DESCRIPTION_PROPERTY = "blogDescription";
    public static final String BLOG_TITLE_PROPERTY = "blogTitle";
    public static final String DATE_CREATED_PROPERTY = "dateCreated";
    public static final String DATE_MODIFIED_PROPERTY = "dateModified";

    public static final String BLOG_ID_PK_COLUMN = "BLOG_ID";

    public void setBlogDescription(String blogDescription) {
        writeProperty("blogDescription", blogDescription);
    }

    public String getBlogDescription() {
        return (String) readProperty("blogDescription");
    }

    public void setBlogTitle(String blogTitle) {
        writeProperty("blogTitle", blogTitle);
    }

    public String getBlogTitle() {
        return (String) readProperty("blogTitle");
    }

    public void setDateCreated(java.util.Date dateCreated) {
        writeProperty("dateCreated", dateCreated);
    }

    public java.util.Date getDateCreated() {
        return (java.util.Date) readProperty("dateCreated");
    }

    public void setDateModified(java.util.Date dateModified) {
        writeProperty("dateModified", dateModified);
    }

    public java.util.Date getDateModified() {
        return (java.util.Date) readProperty("dateModified");
    }
}


So, what is the issue?

Firstly, Cayenne generates a model class that is yours to extend and customize, and that class extends another generated class (_Blog.java).

The _ in _Blog.java means 'do not touch': Cayenne reserves the full right to overwrite it. This is a problem in at least one of these ways:
  1. If I have to modify the model, I have to use the Modeler and then re-generate the code.
  2. I am tied to Cayenne for the model class. The proof? _Blog.java itself extends org.apache.cayenne.CayenneDataObject.
Secondly, Blog.java is not a POJO: it extends _Blog.java, which itself extends CayenneDataObject. (Does it have to be a POJO? No. But does it help if it is a POJO? Of course.)

Thirdly, can you find getBlogId() or setBlogId(...) in _Blog.java? Neither can I. The Cayenne documentation says, "Normally it is not advisable to map primary and foreign key columns (PK and FK) as Java class properties (ObjAttributes)." (*3) But it does not explain the wisdom behind that. So you have to write a custom method on the model itself to get the id (see the referenced link; a sketch follows below). Setting an id, though, is still a no-no. Of course, you may agree with the methodology of not exposing the id on the model, but I tend to like having it.
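(For completeness, the workaround the Cayenne docs point to is roughly along these lines; the helper class and method name are from memory, so verify them against the linked page before relying on this.)


public class Blog extends _Blog {

    // read the primary key off the underlying ObjectId, since it is not mapped as a property
    public int getBlogId() {
        return org.apache.cayenne.DataObjectUtils.intPKForObject(this);
    }
}
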

So, to keep it short, I'll limit my discussion to these three points. And no, I'm not skipping Hibernate; I'll relate it to the same three points.

Let's look at a Hibernate model class.

import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name="CLIENT")
public class Client implements Serializable {

    @Id
    @Column(name="id", nullable=false)
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;

    @Column(name="name", nullable=false, length=255, unique=true)
    private String name;

    @Column(name="account_manager", nullable=true, length=255)
    private String accountManager;

    @Column(name="account_owner", nullable=true, length=255)
    private String accountOwner;

    public Client() {
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getAccountManager() {
        return accountManager;
    }

    public void setAccountManager(String accountManager) {
        this.accountManager = accountManager;
    }

    public String getAccountOwner() {
        return accountOwner;
    }

    public void setAccountOwner(String accountOwner) {
        this.accountOwner = accountOwner;
    }
}


This example of the model class uses annotations, though all of this can also be accomplished with an XML mapping file. I prefer annotations, and using annotations has its benefits, which I won't go into here, as I'm discussing model classes.

So, the first issue: I write the class myself. Even though there are tools that can generate model classes, there is nothing stopping me from writing or modifying one by hand.

Secondly, Client.java implements Serializable and extends nothing framework-specific, which qualifies it as a POJO. I know, I know, this class carries javax.persistence.Entity, javax.persistence.Id and similar annotations, but there is nothing preventing me from instantiating a Client, setting all its properties, and then persisting it.

Thirdly, getId() and setId(...). Ahh. Doesn't that make you breathe easier?
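Just to make that concrete, here is a minimal sketch of persisting the Client above with a plain Hibernate Session. The ClientSaver class is purely illustrative, and the SessionFactory is assumed to be built elsewhere (for example, from hibernate.cfg.xml).


import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class ClientSaver {

    private final SessionFactory sessionFactory; // assumed to be configured elsewhere

    public ClientSaver(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public Long saveNewClient() {
        // plain instantiation -- no framework base class required
        Client client = new Client();
        client.setName("Acme Corp");
        client.setAccountManager("Jane Doe");

        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        session.save(client); // the generated id is assigned to the entity here
        tx.commit();
        session.close();

        return client.getId(); // and it is readable straight off the POJO
    }
}
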

In any case, I would use Hibernate for these reasons alone, even though there are other equally compelling reasons to use it: since the annotations are the javax.persistence/EJB3 ones, the Hibernate model above could be used with JPA without changing a single line of code (of course, a Hibernate DAO will differ from a JPA DAO); there is extensive Maven support for Hibernate; and other nice-to-haves.

(*1) http://www.hibernate.org/
(*2) http://cayenne.apache.org/
(*3) http://cayenne.apache.org/doc20/accessing-pk-and-fk-values.html

Tuesday, May 6, 2008

What makes a Web 2.0

I recently heard that a business wanted a Web 2.0 Content Management System (CMS) with video and other media as publishable content. I asked: well, if it's Web 2.0, what/who is the user base? And the answer was that the content managers would publish content for public viewing.

That triggered my question: so what's Web 2.0 about it? Does the availability of music / video / pictures make a site Web 2.0, or is there some set of standards that one has to meet?

Obviously, the term Web 2.0 has been around for a few years now, and there are many definitions of what makes a site Web 2.0. So one must ask, when a definition of Web 2.0 is desired, is it the original definition of what was intended by Web 2.0, or is it what is accepted as a norm (if there is any) now?

A quick look on the internet will show you that the term was first floated at the O'Reilly Media Web 2.0 conference in 2004. Tim O'Reilly defines Web 2.0 as

"Web 2.0 is the business revolution in the computer industry caused by the move to the internet as platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them. (This is what I've elsewhere called "harnessing collective intelligence.")"(*1)

This is a pretty general definition, and it leaves room for some variance. Which I think is good for something that is still in development or formation. Web 2.0 is still forming and changing as we speak, so having a definition that touches the essence but leaves the details out is a good thing.

But its side-effect is that it is difficult for some to pinpoint exactly what makes a site Web 2.0.

I say, 'an' answer is in the original definition, "Chief among those rules is this: Build applications that harness network effects.."

Which I would like to interpret (and yes, I could be wrong) as: applications where the content / service is produced / used (read: harnessed) by the user base (read: the network).

So, lets take some examples.

  1. eBay
  2. Facebook
  3. MySpace
  4. Wikipedia
  5. Flickr
  6. del.icio.us
  7. YouTube

These sites are commonly given as examples of Web 2.0 sites. But what makes them a Web 2.0 site and not others, like

  1. NYTimes
  2. CNN
  3. Amazon

The question that should be asked is: who generates the content and uses the services on these sites? And the answer is: people. They post stuff to sell on eBay, they post profiles on Facebook and MySpace, they write articles on Wikipedia, they upload pictures to Flickr, they save bookmarks on del.icio.us and they post videos on YouTube... and the common purpose is to network and share, using the internet as a 'platform' to achieve that goal. The former set of sites are examples of Web 2.0 sites, and the 'harnessing of network effects' is what is common among all of them.

(*1) http://radar.oreilly.com/archives/2006/12/web-20-compact-definition-tryi.html