Event-driven services in SOA


Design an event-driven and service-oriented platform with Mule





Summary


Responding to real-time changes and events in a timely manner is becoming one of the most important requirements for an enterprise framework. This article discusses technologies and mechanisms that enable a service-oriented framework to effectively respond to real-time stimuli and, therefore, send and receive synchronous and asynchronous events across layers of an architectural stack without knowing the details of the underlying event system. (2,800 words; January 31, 2005)


By Jeff Hanson








Internet transactions, business-to-business systems, peer-to-peer processes, and real-time workflows are too dynamic and too complex to be modeled by traditional sequential-processing methods. Therefore, the need for more sophisticated asynchronous processing techniques is quickly becoming apparent. To address these unpredictable environments, the current trend in systems architecture is service-oriented design and event-driven programming.



A service-oriented architecture (SOA) presents a dynamic runtime environment, where loose couplings between service providers and/or service consumers enable powerful and flexible component interactions. Building a communication model to exploit this power and flexibility is a high priority for competitive software development. An event-driven communication model is able to respond better to real-time changes and stimuli than conventional request/reply mechanisms.



Service-oriented and event-driven architectures are natural fits for distributed systems since they share many of the same characteristics, such as modularity, loose coupling, and adaptability.



In this article, I discuss the details of designing an effective event-driven and service-oriented platform using Mule, a lightweight event-messaging framework designed around an enterprise service bus (ESB) pattern. Components and applications can use Mule to communicate through a common messaging framework implemented transparently using Java Message Service (JMS) or another messaging technology.

Overview of service-oriented architecture

The term "service-oriented" has evolved to define an architecture where a service is a software component that embodies a core piece of an enterprise's business logic and features the following characteristics:



  • Loosely coupled: Services are not fundamentally bound to other components


  • Protocol-independent: Multiple protocols can transparently access a given service


  • Location-agnostic: Services can be accessed in the same manner no matter their location


  • Coarse-grained: A given service typically performs a composite piece of business logic and returns the result in a single call


  • Stateless: Services maintain no user state



Services typically focus exclusively on solving business-domain problems.



Generally, service clients rely on configuration data, registries, and software factories to determine the location, protocol, and public interface for each service.



Applications are typically described by what they do, not necessarily by what they are or what they contain. For this reason, it's much more straightforward to describe an application publicly using verbs (services) as opposed to nouns (objects). Since objects define a thing and not an action, an impedance mismatch can occur when attempting to encapsulate what a component does as opposed to what a component is. In SOA, an application is described naturally, since each logical business operation of the application that can be verbalized is a likely candidate for a service. Therefore, SOA solves the impedance mismatch by allowing applications and components to access the functionality of services based on what the services do, i.e., what actions they perform. In turn, application developers can more easily match their needs with the appropriate services, since the interfaces for the services describe more completely the problems they solve.
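
To make this concrete, here is a minimal sketch of a service contract named for what it does; the interface and its operations are hypothetical, invented purely for illustration:

public interface PremiumQuotingService
{
   // A hypothetical service contract expressed as verbs (operations),
   // not nouns (objects). Each method is a verbalizable business operation.

   // Calculates a premium for the given risk profile.
   double calculatePremium(String riskProfileId);

   // Submits a quote and returns a confirmation identifier.
   String submitQuote(String quoteDocument);
}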

Overview of event-driven architecture

An event-driven architecture (EDA) defines a methodology for designing and implementing applications and systems in which events transmit between loosely coupled software components and services. An event-driven system typically consists of event consumers and event producers. Event consumers subscribe to an intermediary event manager, and event producers publish to this manager. When the event manager receives an event from a producer, the manager forwards the event to the subscribed consumer. If the consumer is unavailable, the manager can store the event and try to forward it later. This method of event transmission is referred to in message-based systems as store and forward.
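
As a rough illustration of the store-and-forward idea, the following self-contained sketch (hypothetical code, not part of Mule or any other framework) stores an event whenever its consumer is unavailable and forwards it once the consumer subscribes:

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical, minimal event manager illustrating store and forward.
public class SimpleEventManager
{
   // A consumer reports whether it was able to accept the event.
   public interface EventConsumer
   {
      boolean tryConsume(Object event);
   }

   private final Map<String, EventConsumer> consumers = new HashMap<String, EventConsumer>();
   private final Map<String, Queue<Object>> stored = new HashMap<String, Queue<Object>>();

   // Event consumers subscribe to the intermediary event manager.
   public synchronized void subscribe(String topic, EventConsumer consumer)
   {
      consumers.put(topic, consumer);
      redeliver(topic, consumer); // forward anything stored while it was away
   }

   // Event producers publish to the manager, never to consumers directly.
   public synchronized void publish(String topic, Object event)
   {
      EventConsumer consumer = consumers.get(topic);
      if (consumer == null || !consumer.tryConsume(event))
      {
         // Consumer unavailable: store the event and try to forward it later.
         Queue<Object> queue = stored.get(topic);
         if (queue == null)
         {
            queue = new ArrayDeque<Object>();
            stored.put(topic, queue);
         }
         queue.add(event);
      }
   }

   private void redeliver(String topic, EventConsumer consumer)
   {
      Queue<Object> queue = stored.get(topic);
      while (queue != null && !queue.isEmpty() && consumer.tryConsume(queue.peek()))
      {
         queue.remove();
      }
   }
}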



Building applications and systems around an event-driven architecture allows these applications and systems to be constructed in a manner that facilitates more responsiveness, since event-driven systems are, by design, better adapted to unpredictable and asynchronous environments.

Benefits of event-driven design and development

Event-driven design and development provide the following benefits:



  • Allows easier development and maintenance of large-scale, distributed applications and services involving unpredictable and/or asynchronous occurrences


  • Allows new and existing applications and services to be assembled, reassembled, and reconfigured easily and inexpensively


  • Promotes component and service reuse, therefore enabling a more agile and less error-prone development environment


  • Short-term benefits: Allows easier customization because the design is more responsive to dynamic processes


  • Long-term benefits: Keeps a system's and an organization's picture of its own health more accurate and more closely synchronized with real-time changes



EDA and SOA together

Unlike a request/reply system, where callers must explicitly request information, an event-driven architecture (EDA) provides a mechanism for systems to respond dynamically as events occur. In an EDA, events are published by event producers, and event consumers receive events as they happen.



Business systems benefit from the features of both an SOA and an EDA, since an EDA can trigger event consumers as events happen and loosely coupled services can be quickly accessed and queried from those same consumers.



For systems to be most responsive, they must be able to quickly determine the necessary actions when events are triggered. To this end, events should be published and consumed across all boundaries of the SOA, including the layers of the architectural stack and across physical tiers.



Figure 1 illustrates possible events that can be triggered across layers of an architectural stack:








Figure 1: Event flow across architecture stack.



In the context of Figure 1, an event can be defined as any published change in a system, platform, component, business, or application process. Events can be high-level and business-oriented, or low-level and technical in character. Because events can be transmitted and received, event-aware applications and services can respond to the underlying changes as needed.

Event taxonomies and causality

The secret to understanding a given event is to know its cause at the time the event occurred, knowledge often referred to as event causality. Event causality is typically divided into two basic categories:



  • Horizontal causality: Both the event's source and cause reside on the same conceptual layer in the architectural stack


  • Vertical causality: The event's source and cause reside on different conceptual layers in the architectural stack



Vertical causality implies an event taxonomy that remains somewhat constant across different layers of a system, as illustrated by the following list:



  • Lifecycle events: Signify changes in an entity's lifecycle, such as stopping or starting a process


  • Execution events: Signify runtime occurrences, such as service or component invocations


  • Management events: Signify that monitored values have exceeded defined thresholds, limits, or ranges



Horizontal causality implies an event taxonomy that also remains somewhat constant across different layers of a system, as illustrated by the following list:



  • System-layer events: Signify system-level activities, such as the creation of a file or the closing of a port


  • Platform-layer events: Signify platform-level activities, such as the modification of a datasource or the addition of a new service


  • Component-layer events: Signify component-level activities, such as the transformation of a view object or a state-machine transition


  • Business-layer events: Signify business-level activities, such as the creation of a user or the removal of an account


  • Application-layer events: Signify application-level activities, such as a premium increase or a quote submission



The benefits of event-driven communication within an SOA are currently being realized by a number of ESB frameworks and platforms. One of the most promising of these within the Java development realm is Mule.

Introducing Mule

Mule is an open source ESB messaging framework and message broker, loosely based on the staged event-driven architecture (SEDA). SEDA defines a highly concurrent enterprise platform in terms of stages (self-contained application components) connected by queues. Mule uses concepts from SEDA to increase event-processing performance.
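
To picture what SEDA's stages and queues amount to, here is a small, self-contained sketch (a hypothetical illustration, not Mule code): each stage owns an incoming queue and a pool of worker threads that drain it, so a slow stage backs up its own queue rather than blocking its callers.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical illustration of a SEDA stage: a self-contained component
// fed by a queue and drained by its own worker threads.
public class Stage
{
   private final BlockingQueue<Object> inbox = new LinkedBlockingQueue<Object>();
   private final ExecutorService workers;

   public Stage(final String name, int threads)
   {
      workers = Executors.newFixedThreadPool(threads);
      for (int i = 0; i < threads; i++)
      {
         workers.execute(new Runnable()
         {
            public void run()
            {
               try
               {
                  while (true)
                  {
                     Object event = inbox.take(); // block until an event arrives
                     System.out.println(name + " processing " + event);
                  }
               }
               catch (InterruptedException e)
               {
                  Thread.currentThread().interrupt(); // shut down quietly
               }
            }
         });
      }
   }

   // Upstream stages enqueue events rather than calling downstream code directly.
   public void accept(Object event)
   {
      inbox.add(event);
   }
}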



Mule provides support for asynchronous, synchronous, and request-response event processing using disparate technologies and transports such as JMS, HTTP, email, and XML-based Remote Procedure Call. Mule can be easily embedded into any application framework and explicitly supports the Spring framework. Mule also supports dynamic, declarative, content-based, and rule-based message routing. Mule facilitates declarative and programmatic transaction support, including XA transaction support. Mule provides a representational state transfer (REST) API to provide Web-based access to events.



The Mule ESB model drives all services in a system over a decoupled, message-communication backbone. Services registered with the bus have no knowledge of other registered services; therefore, each service is concerned with processing only the events it receives. Mule also decouples container, transport, and transformation details from the services, allowing any kind of object to be registered as a service on the bus.



I use the Mule framework to demonstrate the concepts and ideas discussed in this article.



The Mule architecture

The Mule architecture consists primarily of the following elements:

The Universal Message Object (UMO) API

The UMO API defines the services and interactions of objects to be managed by Mule.

UMO components

A UMO component can be any component in the Mule system that receives, processes, and sends data as event messages.

Mule server

The Mule server component is a server application launched to bootstrap the Mule environment.

Descriptors

The descriptor components describe a Mule UMO's attributes. New Mule UMOs can be initialized as needed from their associated descriptor. A descriptor consists of:



  • The UMO component name

  • The UMO component version

  • The UMO component implementation class

  • An exception strategy

  • Inbound and outbound providers

  • Inbound and outbound routers

  • Interceptors

  • Receive and send endpoints

  • Inbound and outbound transformers

  • Miscellaneous properties

Connectors

Connectors are components that provide the implementation for connecting to an external system or protocol and managing the session for that system or protocol. A connector is responsible for sending data to an external message receiver and for managing the registration and deregistration of message receivers.

Providers

Providers are components that manage the sending, receiving, and transformation of event data to and from external systems. They enable connections to external systems or other components in Mule. A provider acts as a bridge from the external system into Mule and vice versa. It is, in fact, a composite of a set of objects used to connect to and communicate with the underlying system. The elements of a provider are:



  • Connector: Responsible for connecting to the underlying system


  • Message receiver: Used to receive events from the system


  • Connector dispatchers: Pass data to the system


  • Transformers: Convert data received from the system and data being sent to the system


  • Endpoint: Used as the channel address through which a connection is made


  • Transaction configuration: Used to define the connection's transactional properties

Endpoint resolvers

Endpoint resolvers determine what method to invoke on a UMO component when the component receives an event.

Transformers

Transformer components transform message or event payloads to and from different data formats. Transformers can be chained together to perform successive transforms on an event before an object receives it.

Message adapters

Message adapters provide a common manner in which to read disparate data from external systems.

Message receivers

Message receivers are listener-endpoint threads that receive data from an external system.

Message dispatchers

Message dispatchers send (synchronous) or dispatch (asynchronous) events to the underlying technology.

Message routers

Message routers are components that can be configured for a UMO component to route a message to different providers based on the message or other configuration.

Agents

Agents are components that bind to external services such as Java Management Extensions (JMX) servers.

Mule model

A Mule model encapsulates and manages the runtime behavior of a Mule server instance. A model consists of:



  • Descriptors

  • UMO components

  • An endpoint resolver

  • A lifecycle-adapter factory

  • A component resolver

  • A pool factory

  • An exception strategy

Mule manager

The Mule manager maintains and provides the following services:



  • Agents

  • Providers

  • Connectors

  • Endpoints

  • Transformers

  • The interceptor stack

  • A Mule model

  • A Mule server

  • The transaction manager

  • Application properties

  • The Mule configuration



The diagram in Figure 2 illustrates a high-level view of the message flow for the Mule architecture.








Figure 2: Mule high-level architecture.


Mule events

Mule events contain event data and properties examined and manipulated by event-aware components. The properties are arbitrary and can be set at any time after an event is created.



The org.mule.umo.UMOEvent class represents an event occurring in the Mule environment. All data sent or received within the Mule environment is passed between components as an instance of UMOEvent. The data in a Mule event can be accessed in its original format or in a transformed format. A Mule event uses a transformer associated with the provider that received the event to transform the event's payload into a format the receiving component understands.



The payload for a Mule event is contained within an instance of the org.mule.umo.UMOMessage interface. A UMOMessage instance is composed of the payload itself and its associated properties. This interface also acts as a common abstraction of different message implementations provided by different underlying technologies.



The org.mule.extras.client.MuleClient class defines a simple API that allows Mule clients to send and receive events to and from a Mule server. In most Mule applications, events are triggered by some external occurrence, such as a message received on a topic or a file being deleted from a directory.



The following illustrates how to send an event synchronously to another Mule component:




String componentName = "MyReceiver";    // The name of the receiving component.
String transformers = null;             // A comma-separated list of transformers
                                        // to apply to the result message.
String payload = "A test event";        // The payload of the event.
java.util.Map messageProperties = null; // Any properties to be associated
                                        // with the payload.

MuleClient client = new MuleClient();
UMOMessage message = client.sendDirect(componentName,
                                       transformers,
                                       payload,
                                       messageProperties);
System.out.println("Event result: " + message.getPayloadAsString());



An instance of MuleClient requires a server URL to define the endpoint for the remote Mule server to which the MuleClient instance will connect. The URL defines the protocol, the message's endpoint destination, and, optionally, the provider to use when dispatching the event. Endpoint examples are:



  • vm://com.jeffhanson.receivers.Default: Dispatches to a com.jeffhanson.receivers.Default destination using the virtual machine provider. The VM provider enables intra-VM event communication between components using transient or persistent queues.


  • jms://jmsProvider/accounts.topic: Dispatches a JMS message via the globally registered jmsProvider to a topic destination called accounts.topic.


  • jms://accounts.topic: Dispatches a JMS message via the first (default) JMS provider.
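
For illustration, a client might target these endpoint URIs directly rather than a named component. This is a hedged sketch: it assumes the Mule 1.x MuleClient methods send() and dispatch() with (url, payload, properties) signatures, which should be verified against the actual API.

// Hypothetical usage; signatures assumed, not confirmed by this article.
MuleClient client = new MuleClient();

// Synchronous send over the intra-VM provider; blocks until a result returns.
UMOMessage reply = client.send("vm://com.jeffhanson.receivers.Default",
                               "A test event", null);

// Fire-and-forget dispatch of a JMS message to the accounts.topic destination.
client.dispatch("jms://accounts.topic", "A test event", null);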



Mule event processing

Mule can send and receive events in three different ways:



  1. Asynchronously: A given component can simultaneously process multiple events sent and received by different threads.


  2. Synchronously: A single event must complete processing before a component can resume execution. In other words, a component that produces an event sends the event and then blocks until the call returns, thereby allowing only one event at a time to be processed.


  3. Request-response: A component specifically requests an event and waits for a specified time to receive a response.
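
As a rough sketch of how these three modes map onto client calls (again assuming the Mule 1.x MuleClient methods dispatch(), send(), and receive(); treat the exact names and signatures as assumptions):

// Hypothetical endpoint names; signatures assumed, not confirmed by this article.

// 1. Asynchronous: dispatch the event and continue immediately.
client.dispatch("vm://orders", payload, null);

// 2. Synchronous: block until the receiving component returns a result.
UMOMessage result = client.send("vm://orders", payload, null);

// 3. Request-response: ask an endpoint for an event, waiting up to five seconds.
UMOMessage response = client.receive("vm://orders.replies", 5000);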



The org.mule.impl.MuleComponent class provides a concrete component implementation that includes all the functionality needed to send and receive data and create events.



Objects that execute synchronously are encouraged to implement the org.mule.umo.lifecycle.Callable interface, which defines a single method, Object onCall(UMOEventContext eventContext). The Callable interface provides UMOs with an interface that supports event calls. Although not mandatory, the interface provides a lifecycle method that executes when the implementing component receives an event. The following illustrates a simple implementation of this interface:




import org.mule.umo.UMOEventContext;
import org.mule.umo.lifecycle.Callable;

public class EchoComponent
   implements Callable
{
    public Object onCall(UMOEventContext context) throws Exception
    {
        String msg = context.getMessageAsString();
        // Print message to System.out
        System.out.println("Received synchronous message: " + msg);
        // Echo transformed message back to sender
        return context.getTransformedMessage();
    }
}



The object returned from the onCall() method can be anything. When the UMOLifecycleAdapter for the component receives this object, it first checks whether the object is a UMOMessage; if the object is neither a UMOMessage nor null, a new message is created using the returned object as the payload. This new event is then published via the configured outbound router, provided one has been configured for the UMO and the setStopFurtherProcessing(true) method wasn't called on the UMOEventContext instance.

A simple event framework using Mule

Let's put the pieces of Mule together to construct a simple event framework. The framework consists of an event manager responsible for registering and deregistering services that can receive events, and for synchronously and asynchronously routing messages to these services.



The Mule "vm" protocol requires that a configuration file be located at a directory named META-INF/services/org/mule/providers/vm, relative to the event manager's working directory. This file defines numerous components for the protocol, such as the connector and dispatcher factory. The file's contents are as follows:




connector=org.mule.providers.vm.VMConnector
dispatcher.factory=org.mule.providers.vm.VMMessageDispatcherFactory
message.receiver=org.mule.providers.vm.VMMessageReceiver
message.adapter=org.mule.providers.vm.VMMessageAdapter
endpoint.builder=org.mule.impl.endpoint.ResourceNameEndpointBuilder



A simple interface defines the event manager's public view:




package com.jeffhanson.mule;

import org.mule.umo.FutureMessageResult;

public interface EventManager
{
   /**
    * Sends an event message synchronously to a given service.
    *
    * @param serviceName The name of the service to which the event
    *                    message is to be sent.
    * @param payload     The content of the event message.
    * @return Object     The result, if any.
    * @throws EventException on error
    */
   public Object sendSynchronousEvent(String serviceName, Object payload)
      throws EventException;

   /**
    * Sends an event message asynchronously to a given service.
    *
    * @param serviceName The name of the service to which the event
    *                    message is to be sent.
    * @param payload     The content of the event message.
    * @return FutureMessageResult The result, if any.
    * @throws EventException on error
    */
   public FutureMessageResult sendAsynchronousEvent(String serviceName, Object payload)
      throws EventException;

   /**
    * Starts this event manager.
    */
   public void start();

   /**
    * Stops this event manager.
    */
   public void stop();

   /**
    * Retrieves the protocol this event manager uses.
    *
    * @return String The protocol.
    */
   public String getProtocol();

   /**
    * Registers a service to receive event messages.
    *
    * @param serviceName    The name to associate with the service.
    * @param implementation Either a container reference to the service
    *                       or a fully qualified class name.
    */
   public void registerService(String serviceName, String implementation)
      throws EventException;

   /**
    * Unregisters a service from receiving event messages.
    *
    * @param serviceName The name associated with the service to unregister.
    */
   public void unregisterService(String serviceName)
      throws EventException;
}



The event-manager implementation class is encapsulated within a factory class, thereby allowing the implementation to change as needed without affecting the event manager's clients. The event-manager implementation is shown below:




package com.jeffhanson.mule;

import org.mule.umo.*;
import org.mule.extras.client.MuleClient;
import org.mule.impl.endpoint.MuleEndpoint;
import org.mule.config.QuickConfigurationBuilder;

import java.util.HashMap;
import java.util.Map;

public class EventManagerFactory
{
   private static HashMap instances = new HashMap();

   /**
    * Retrieves the event manager instance for a given protocol.
    *
    * @param protocol      The protocol to use.
    * @return EventManager The event manager instance.
    */
   public static EventManager getInstance(String protocol)
   {
      EventManager instance = (EventManager)instances.get(protocol);

      if (instance == null)
      {
         instance = new EventManagerImpl(protocol);
         instances.put(protocol, instance);
      }

      return instance;
   }

   /**
    * A concrete implementation for a simple event manager.
    */
   private static class EventManagerImpl
      implements EventManager
   {
      private UMOManager manager = null;
      private QuickConfigurationBuilder builder = null;
      private MuleClient eventClient = null;
      private String protocol = null;
      private MuleEndpoint receiveEndpoint = null;
      private MuleEndpoint sendEndpoint = null;

      private EventManagerImpl(String protocol)
      {
         this.protocol = protocol;
      }

      /**
       * Starts this event manager.
       */
      public void start()
      {
         try
         {
            builder = new QuickConfigurationBuilder();
            manager = builder.createStartedManager(true, protocol + "tmp/events");
            eventClient = new MuleClient();
            receiveEndpoint = new MuleEndpoint(protocol + "tmp/events/receive");
            sendEndpoint = new MuleEndpoint(protocol + "tmp/events/send");
         }
         catch (UMOException e)
         {
            System.err.println(e);
         }
      }

      /**
       * Stops this event manager.
       */
      public void stop()
      {
         try
         {
            manager.stop();
         }
         catch (UMOException e)
         {
            System.err.println(e);
         }
      }

      /**
       * Retrieves the protocol this event manager uses.
       *
       * @return String The protocol.
       */
      public String getProtocol()
      {
         return protocol;
      }

      /**
       * Registers a service to receive event messages.
       *
       * @param serviceName    The name to associate with the service.
       * @param implementation Either a container reference to the service
       *                       or a fully qualified class name to use as
       *                       the component implementation.
       */
      public void registerService(String serviceName, String implementation)
         throws EventException
      {
         if (!manager.getModel().isComponentRegistered(serviceName))
         {
            try
            {
               builder.registerComponent(implementation, serviceName,
                                         receiveEndpoint, sendEndpoint);
            }
            catch (UMOException e)
            {
               throw new EventException(e.toString());
            }
         }
      }

      /**
       * Unregisters a service from receiving event messages.
       *
       * @param serviceName The name associated with the service to unregister.
       */
      public void unregisterService(String serviceName)
         throws EventException
      {
         try
         {
            builder.unregisterComponent(serviceName);
         }
         catch (UMOException e)
         {
            throw new EventException(e.toString());
         }
      }

      /**
       * Sends an event message synchronously to a given service.
       *
       * @param serviceName The name of the service to which the event
       *                    message is to be sent.
       * @param payload     The content of the event message.
       * @return Object     The result, if any.
       * @throws EventException on error
       */
      public Object sendSynchronousEvent(String serviceName, Object payload)
         throws EventException
      {
         try
         {
            if (!manager.getModel().isComponentRegistered(serviceName))
            {
               throw new EventException("Service: " + serviceName
                                        + " is not registered.");
            }

            String transformers = null;
            Map messageProperties = null;
            UMOMessage result = eventClient.sendDirect(serviceName,
                                                       transformers,
                                                       payload,
                                                       messageProperties);
            if (result == null)
            {
               return null;
            }
            return result.getPayload();
         }
         catch (Exception e)
         {
            throw new EventException(e.toString());
         }
      }

      /**
       * Sends an event message asynchronously.
       *
       * @param serviceName The name of the service to which the event
       *                    message is to be sent.
       * @param payload     The content of the event message.
       * @return FutureMessageResult The result, if any.
       * @throws EventException on error
       */
      public FutureMessageResult sendAsynchronousEvent(String serviceName, Object payload)
         throws EventException
      {
         FutureMessageResult result = null;

         try
         {
            if (!manager.getModel().isComponentRegistered(serviceName))
            {
               throw new EventException("Service: " + serviceName
                                        + " is not registered.");
            }

            String transformers = null;
            Map messageProperties = null;
            result = eventClient.sendDirectAsync(serviceName,
                                                 transformers,
                                                 payload,
                                                 messageProperties);
         }
         catch (UMOException e)
         {
            throw new EventException(e.toString());
         }

         return result;
      }
   }
}



The Mule framework dispatches messages by the payload's data type. The event framework can exploit this payload-based dispatching mechanism by defining generic event methods to act as event receivers in the services registered with the event manager. The following class defines one of these services with three overloaded event methods named receiveEvent():




package com.jeffhanson.mule;

import java.util.Date;

public class TestService
{
   public void receiveEvent(String eventMessage)
   {
      System.out.println("\n\nTestService.receiveEvent(String) received "
                         + "event message:  " + eventMessage + "\n\n");
   }

   public void receiveEvent(Integer eventMessage)
   {
      System.out.println("\n\nTestService.receiveEvent(Integer) received "
                         + "event message:  " + eventMessage + "\n\n");
   }

   public void receiveEvent(Date eventMessage)
   {
      System.out.println("\n\nTestService.receiveEvent(Date) received "
                         + "event message:  " + eventMessage + "\n\n");
   }
}



The event manager's client application sends three events to the test service to test each receiveEvent() method. The client application follows:




package com.jeffhanson.mule;

import org.apache.log4j.Logger;
import org.apache.log4j.Level;
import org.apache.log4j.BasicConfigurator;

import java.util.Date;

public class EventClient
{
   static Logger logger = Logger.getLogger(EventClient.class);

   public static void main(String[] args)
   {
      // Set up a simple configuration that logs on the console.
      BasicConfigurator.configure();
      logger.setLevel(Level.ALL);

      try
      {
         EventManager eventManager = EventManagerFactory.getInstance("vm://");
         eventManager.start();

         String serviceName = TestService.class.getName();
         String implementation = serviceName;

         eventManager.registerService(serviceName, implementation);

         Object result =
            eventManager.sendSynchronousEvent(serviceName, "A test message");
         if (result != null)
         {
            System.out.println("Event result: " + result.toString());
         }

         result = eventManager.sendSynchronousEvent(serviceName, new Integer(23456));
         if (result != null)
         {
            System.out.println("Event result: " + result.toString());
         }

         result = eventManager.sendSynchronousEvent(serviceName, new Date());
         if (result != null)
         {
            System.out.println("Event result: " + result.toString());
         }

         eventManager.stop();
      }
      catch (EventException e)
      {
         System.err.println(e.toString());
      }
   }
}



The simplifications and abstractions that the preceding framework provides on top of the Mule platform enable you to send and receive synchronous and asynchronous events across layers of an architectural stack without knowing the details of the underlying event system. The Factory pattern and SOA principles are exploited to facilitate a loosely coupled and extensible design.

Summary

Designing an effective event-driven software system can grow complex when services and processes need to interact across multiple tiers and protocols. However, a service-oriented architecture built around a properly designed event-management layer using standard industry patterns can reduce or even eliminate these problems.



The Mule platform provides APIs, components, and abstractions that can be used to build a powerful, robust, event-driven system that is scalable and highly maintainable.



About the author

Jeff Hanson has more than 18 years of experience in the software industry. He has worked as a senior engineer for the Windows OpenDoc project and as lead architect for the Route 66 framework at Novell. He is currently the chief architect for eReinsure.com, building Web services frameworks and platforms for J2EE-based reinsurance systems. Hanson has also authored numerous articles and books including Pro JMX: Java Management Extensions (Apress, November 2003; ISBN: 1590591011) and Web Services Business Strategies and Architectures (Wrox Press, August 2002; ISBN: 1904284132).











Reflective XML-RPC


Dynamically invoke XML-based Remote Procedure Call





Summary


Java reflection offers a simple but effective way of hiding some of the complexity of remote procedure calls with XML-RPC (XML-based Remote Procedure Call). In this article, Stephan Maier shows how to wrap XML-RPC calls to a remote interface using three gadgets from the reflection kit: the Proxy, Array, and BeanInfo classes. The article also discusses various ramifications of the approach and the use of reflective methods in RMI (Remote Method Invocation). (3,800 words; February 7, 2005)


By Stephan Maier








XML-based Remote Procedure Call (XML-RPC) receives occasional attention as a simple protocol for remote procedure calls. It is straightforward to use, and easily available implementations such as Apache XML-RPC facilitate the protocol's use.



If your application is small or uses a limited number of remote procedures, you might prefer not to formally define the names of remote procedures and their signatures, but instead use XML-RPC in a straightforward way. Yet, if your application grows and the number of remote interfaces increases, you might find that the necessary conventions—remote methods and data objects—must be somehow fixed. In this article, I show how Java provides all you need to define remote interfaces and access remote methods: procedures and their signatures can be defined via Java interfaces, and remote procedure calls with XML-RPC can be wrapped such that both sides of a communication channel see only interfaces and suitable data objects.



This article also shows that when given Java interfaces describing the remote procedures and datastructures conforming to the JavaBeans specification, you can use the power of Java reflection as incorporated into the Reflection and JavaBeans packages to invoke remote methods transparently and convert between the data types of XML-RPC and Java with surprising ease.



Hiding complexity is good practice in itself. Needless to say, not all complexity can or should be hidden. With respect to distributed computing, this point has been famously made by Jim Waldo et al. in "A Note on Distributed Computing" (Sun Microsystems, November 1994). The framework presented here does not intend to hide the complexity of distributed computing, but it promises to reduce the pains involved in calling a remote procedure. For simplicity, I discuss only synchronous remote procedure calls and leave the asynchronous case to the zealous reader.



XML-RPC can be viewed as an oversimplification of RPC via SOAP. And by extension, the simple framework I discuss here must be regarded as a simplistic version of a SOAP engine, such as Axis. This article's main purpose is educational: I wish to show how reflection is employed to build a simple XML-RPC engine on top of existing XML-RPC frameworks. This may help you understand the inner workings of similar but vastly more complex engines for other protocols or how to apply reflection to solve different problems. A simple RPC engine can be used where a SOAP engine is clearly not feasible, such as with small applications that are not exposed via a Web server and where other forms of middleware are unavailable. Roy Miller's "XML-RPC in Java Programming" (developerWorks, January 2004) explains a useful example.



In this article, we use the Apache implementation of XML-RPC (Apache XML-RPC) to set up our framework. You do not need to know XML-RPC, nor do you need to understand the Apache XML-RPC framework, even though a basic understanding will help you appreciate what follows. This article focuses on the framework's precise inner workings, but does not make use of the protocol's details.

Avoiding conventions

Occasionally, I prefer unconventional programming. Having said this, I must immediately assure you that I am no iconoclast and do not reject good programming habits; quite the contrary. The word unconventional here means that I like to avoid conventions expressed in terms of strings scattered throughout the code that could also be defined via a programmatic API. Consider the following piece of code:



Listing 1. Invoking a remote procedure call




Vector paras = new Vector();
paras.add("Herbert");
Object result = client.execute("app.PersonHome.getName", paras);



Listing 1 illustrates how a remote procedure might be called using the Apache XML-RPC implementation. Observe that we need to know both the name of the procedure and the parameters we are allowed to pass to the method. We must also know the object type returned to us by the remote procedure call. Unless you have the implementation class available to check whether you have all the names (app.PersonHome and getName) and parameters right, you will need to look up these names and signatures, usually in some text file or some constant interface (an interface that provides constants for all required names). A suitably placed Javadoc might also be used. Observe that this sort of convention is rather error-prone because errors will show up only at runtime, not at compile time.



Now, in contrast, consider the following piece of code:



Listing 2. Invoking a remote procedure call




Person person = ((PersonHome)Invocator.getProxy(PersonHome.class)).getPerson("Herbert");



Here, we call a static method getProxy() on the class Invocator to retrieve an implementation of the interface PersonHome. On this interface, we can call the method getPerson() and, as a result, obtain a Person object.



Listing 2's code is much more economical than the code in Listing 1. In Listing 2, we can use a method defined on an interface, which neatly defines the available methods, their signatures, and the return types all in one place. Type-safety comes along free of charge, and the code is more readable because it is freed from redundant constructs such as the Vector class.



Furthermore, if you are using a sufficiently powerful IDE, code completion will list all available methods on PersonHome together with their signatures. Thus, we get IDE programming support on top of a type-safe remote method call.



I must admit that we cannot do without conventions. The one convention we must keep (unless we are prepared to accept considerable overhead and complications) is the assumption that all data objects conform to the JavaBeans specification. Simply stated, this means that object properties are exposed via getter/setter method pairs. This assumption's importance will become clear when I talk about converting XML-RPC datastructures into Java objects.
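
For illustration, the PersonHome interface and Person data object used in Listing 2 might look as follows. This is a hypothetical sketch consistent with the JavaBeans convention, not code taken from the framework:

// PersonHome.java: hypothetical remote interface. Names, signatures, and
// return types are fixed in one place, and the compiler checks every call site.
public interface PersonHome
{
   Person getPerson(String name);
}

// Person.java: hypothetical data object conforming to the JavaBeans convention,
// i.e., a public no-argument constructor plus getter/setter pairs per property.
public class Person
{
   private String name;
   private int age;

   public Person() {}

   public String getName() { return name; }
   public void setName(String name) { this.name = name; }

   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
}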



Requiring all data objects to be JavaBeans is far superior to the string conventions of a straightforward XML-RPC application because it is a single, general convention, and one that is natural for all Java programmers. Towards the end of the article, I discuss XML-RPC's limitations and suggest other useful conventions that can help you live with those limitations.



The following sections walk you through an implementation of the Invocator class and a suitable version of a local server that provides the other end of our framework's communication channel.

Implementing invocations

Let's first look at the method that provides an interface's implementation:



Listing 3. Creating the proxy




public static Object getProxy(Class ifType) {
   if (!ifType.isInterface()) {
      throw new AssertionError("Type must be an interface");
   }
   return Proxy.newProxyInstance(Invocator.class.getClassLoader(),
      new Class[]{ifType}, new XMLRPCInvocationHandler(ifType));
}



The magic is hidden in a simple call to the method Proxy.newProxyInstance(). The class Proxy has been part of the Java Reflection package since Java 1.3. Via its method newProxyInstance(), a collection of interfaces can be implemented dynamically. Of course, the created proxy object does not know how to handle method invocations. Thus, it must pass invocations to a suitable handler—a task for the implementation of the java.lang.reflect.InvocationHandler interface. Here, I have chosen to call this implementation XMLRPCInvocationHandler. The InvocationHandler interface defines a single method, as shown in Listing 4.



Listing 4. InvocationHandler




public interface InvocationHandler {
   public Object invoke(Object proxy, Method method, Object[] args) throws Throwable;
}



When a method is invoked on a proxy instance, the proxy passes that method and its parameters to the handler's invoke() method, while simultaneously identifying itself. Let's now look at our handler's implementation:



Listing 5. InvocationHandler




private static class XMLRPCInvocationHandler implements InvocationHandler {

   private Class type;

   public XMLRPCInvocationHandler(Class ifType) {
      this.type = ifType;
   }

   public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
      XmlRpcClient client = getClient(); // Get a reference to the client
      Vector paras = new Vector();       // Holds the parameters; must not be null
      if (args != null) {
         for (int i = 0; i < args.length; i++) {
            paras.add(ValueConverter.convertFromType(args[i]));
         }
      }
      Class retType = method.getReturnType();
      Object ret = client.execute(type.getName() + '.' + method.getName(), paras);
      return ValueConverter.convertToType(ret, retType);
   }
}



On creation, an instance of XMLRPCInvocationHandler is given the class that defines the remote interface. We use this class only to get the remote interface's name, which, together with the method name available on method invocation, is part of the remote request. Observe that the remote method invocation is thus totally dynamic: we neither invoke methods on a stub class nor require any knowledge from outside the interface.



The client is obtained from the method getClient():



Listing 6. Getting the client




protected static XmlRpcClient getClient() throws MalformedURLException {
   return new XmlRpcClient("localhost", 8080);
}



Here, we are able to use Apache XML-RPC to get a client that handles the remote call for us. Observe that we return a client without consideration of the interface on which the method has been invoked. Needless to say, we could add considerable flexibility by allowing different service endpoints that depend on the interface.



The more important code for our present purposes is represented by the static methods invoked on the class ValueConverter. It is in these methods where reflection does its magic. We look at that code in the following section.

Converting from XML-RPC to Java and back

This section explains the core of our XML-RPC framework. The framework needs to do two things: It needs to convert a Java object into a datastructure understood by XML-RPC, and it needs to perform the reverse process of converting an XML-RPC datastructure into a Java object.



I start by showing how to convert a Java object into a datastructure understood by XML-RPC:



Listing 7. Java to XML-RPC




public static Object convertFromType(Object obj) throws IllegalArgumentException,
      IllegalAccessException, InvocationTargetException, IntrospectionException {
   if (obj == null) {
      return null;
   }
   Class type = obj.getClass();
   if (type.equals(Integer.class)
      || type.equals(Double.class)
      || type.equals(Boolean.class)
      || type.equals(String.class)
      || type.equals(Date.class)) {
      return obj;
   }
   else if (type.isArray() && type.getComponentType().equals(byte.class)) {
      return obj;
   }
   else if (type.isArray()) {
      int length = Array.getLength(obj);
      Vector res = new Vector();
      for (int i = 0; i < length; i++) {
         res.add(convertFromType(Array.get(obj, i)));
      }
      return res;
   }
   else {
      Hashtable res = new Hashtable();
      BeanInfo info = Introspector.getBeanInfo(type, Object.class);
      PropertyDescriptor[] props = info.getPropertyDescriptors();
      for (int i = 0; i < props.length; i++) {
         String propName = props[i].getName();
         Object value = convertFromType(props[i].getReadMethod().invoke(obj, null));
         if (value != null) res.put(propName, value);
      }
      return res;
   }
}




To convert a Java object into a datastructure understood by XML-RPC, we must consider five cases, which are illustrated in the listing above:



  1. Null: If the object we need to convert is null, we just return null.


  2. Primitive type: If the object is of one of the primitive wrapper classes (Integer, Double, or Boolean), or is a String or a Date, then we can return the object itself, as XML-RPC understands these types natively.


  3. base64: If the object is a byte array, it is understood to represent an instance of the base64 type. Again, we may simply return the array itself.


  4. Array: If the object is an array but not a byte array, we can use the utility class Array, which comes with the Java Reflection package to first find the length of the array. We then use this length to loop over the array and, again, using the Array utility, access the individual fields. Each array item is passed to the ValueConverter, and the result is inserted into a vector. This vector represents the array to Apache XML-RPC.


  5. Complex types: If the object is none of the above, we can assume it is a JavaBean, a basic assumption fundamental to the entire construction and the one convention we agreed on at the outset. We insert its attributes into a hashtable. To access the attributes, we use the introspective power of the JavaBeans framework: we use the utility class Introspector to get the bean information that comes encapsulated in a BeanInfo object. In particular, we can loop over the bean's properties by accessing the array of PropertyDescriptor objects. From such a property descriptor, we retrieve the name of the property that will be the key into the hashtable. We get this key's value, i.e., the property value, by using the read method on the property descriptor.



Observe how easy it is to extract information from a bean with the JavaBeans framework. We need to know nothing about the type we want to convert, only that it is a bean. This assumption then is a necessary prerequisite for our framework to function faultlessly.
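
As a quick usage sketch, built on the hypothetical Person bean shown earlier, the converter turns a bean into the hashtable-of-properties structure that Apache XML-RPC understands:

// Hypothetical usage of ValueConverter.convertFromType().
Person person = new Person();
person.setName("Herbert");
person.setAge(42);

Object converted = ValueConverter.convertFromType(person);
// 'converted' is now a Hashtable along the lines of {age=42, name=Herbert},
// ready to be placed into the parameter Vector of an XML-RPC call.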



Let's now turn to the opposite transformation from XML-RPC structures to Java objects:



Listing 8. XML-RPC to Java




public static Object convertToType(Object object, Class type) throws IllegalArgumentException,
      IllegalAccessException, InvocationTargetException, IntrospectionException,
      InstantiationException {
   if (type.equals(int.class)
      || type.equals(double.class)
      || type.equals(boolean.class)
      || type.equals(String.class)
      || type.equals(Date.class)) {
      return object;
   }
   else if (type.isArray() && type.getComponentType().equals(byte.class)) {
      return object;
   }
   else if (type.isArray()) {
      int length = ((Vector) object).size();
      Class compType = type.getComponentType();
      Object res = Array.newInstance(compType, length);
      for (int i = 0; i < length; i++) {
         Object value = ((Vector) object).get(i);
         Array.set(res, i, convertToType(value, compType));
      }
      return res;
   }
   else {
      Object res = type.newInstance();
      BeanInfo info = Introspector.getBeanInfo(type, Object.class);
      PropertyDescriptor[] props = info.getPropertyDescriptors();
      for (int i = 0; i < props.length; i++) {
         String propName = props[i].getName();
         if (((Hashtable) object).containsKey(propName)) {
            Class propType = props[i].getPropertyType();
            props[i].getWriteMethod().invoke(res,
               new Object[] { convertToType(((Hashtable) object).get(propName), propType) });
         }
      }
      return res;
   }
}



Converting to a Java type requires more knowledge than just the value we wish to convert: we must also know which type to convert it to. This explains the second parameter in Listing 8's convertToType() method. Given knowledge of the target type, we use the introspective power of Java to transform XML-RPC data types into Java types. The following list shows how conversion proceeds for the various data types:



  1. Null: XML-RPC does not transmit null values, a limitation I discuss in more detail later. We need not consider this case.


  2. Primitive type: If the target type is one of the primitive types int, double, or boolean, or is String or Date, then we can return the object itself, as the XML-RPC framework already delivers values of these types.


  3. base64: If the object is a byte array, it is understood to represent an instance of the base64 type. We again may simply return the array itself.


  4. Array: If the object is an array but not a byte array, we first discover the type of the array's items. We can determine this component type from the target type, which is an array class, using the method getComponentType(). Next, we use the utility class Array to create a new array with the given component type. Then we loop over the array and, using the Array utility again, set the individual fields, using the ValueConverter to get the right value for each array item. Observe that the datastructure we expect from the XML-RPC framework in the case of an array is a Vector.


  5. Complex types: If the object is none of the above, we can assume it is a JavaBean (by our basic convention). Again, we use the Introspector to find the bean's property descriptors and set the actual properties by invoking each descriptor's write method. Note that the framework hands us the properties stored in a hashtable. Of course, as a property's type may itself be complex, we must use the ValueConverter to obtain the correct Java object.



Armed with this understanding of data conversion, we can now look at how service handling is implemented.

Implementing service handling

Having explained how a remote service is invoked and what is involved in transforming between XML-RPC and Java, I now sketch the last piece of the puzzle: how to handle a request at a service endpoint.



Here is the complete code of the simple server I have implemented for this article's purpose:



Listing 9. Server




public class Server {
   private WebServer webserver = null;

   public void start() {
      webserver = new WebServer(8080);
      webserver.addHandler(PersonHome.class.getName(),
                           new Handler(PersonHome.class, new PersonHomeImpl()));
      webserver.setParanoid(false);
      webserver.start();
   }

   public void stop() {
      webserver.shutdown();
      webserver = null;
   }

   private static class Handler implements XmlRpcHandler {
      private Object instance;
      private Class type;

      public Handler(Class ifType, Object impl) {
         if (!ifType.isInterface()) {
            throw new AssertionError("Type must be an interface");
         }
         if (!ifType.isAssignableFrom(impl.getClass())) {
            throw new AssertionError("Handler must implement interface");
         }
         this.type = ifType;
         this.instance = impl;
      }

      public Object execute(String method, Vector arguments) throws Exception {
         String mName = method.substring(method.lastIndexOf('.') + 1);
         Method[] methods = type.getMethods();
         for (int i = 0; i < methods.length; i++) {
            if (methods[i].getName().equals(mName)) {
               try {
                  Object[] args = new Object[arguments.size()];
                  for (int j = 0; j < args.length; j++) {
                     args[j] = ValueConverter.convertToType(
                        arguments.get(j), methods[i].getParameterTypes()[j]);
                  }
                  return ValueConverter.convertFromType(methods[i].invoke(instance, args));
               }
               catch (Exception e) {
                  if (e.getCause() instanceof XmlRpcException) {
                     throw (XmlRpcException)e.getCause();
                  }
                  else {
                     throw new XmlRpcException(-1, e.getMessage());
                  }
               }
            }
         }
         throw new NoSuchMethodException(mName);
      }
   }

   public static void main(String[] args) {
      Server server = new Server();
      System.out.println("Starting server...");
      server.start();
      try {
         Thread.sleep(30000);
      }
      catch (InterruptedException e) {
         e.printStackTrace();
      }
      System.out.println("Stopping server...");
      server.stop();
   }
}



The key player is the class WebServer, which comes from the Apache XML-RPC package. The call to addHandler() shows our main requirement: we must register a service handler. Such a handler is defined via a simple interface, XmlRpcHandler, which, just like the proxy mechanism's InvocationHandler interface, has a single method to which method invocations are delegated. Here, it is called execute(), and its implementation is the same in spirit as InvocationHandler's. The most notable difference is that we must register a handler holding both the interface and its implementation. In the InvocationHandler implementation above, we did not need to provide an implementation of the service interface (in the form of a stub). In the server, however, we must define which code is responsible for handling incoming requests. Finally, observe that we use the usual approach when invoking the service method: looping through the interface's methods to find the right one. Here, we cannot rely on standard introspection into JavaBeans because service methods are not likely to be mere setters and getters.
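
For completeness, the server can also be exercised without the reflective proxy layer, using the plain Apache XML-RPC client. The following is only a sketch: the method name getPerson and its integer argument are assumptions for illustration, but the handler-name convention (the interface's fully qualified name, as registered above) is exactly what the Handler's execute() method expects:

import java.util.Vector;
import org.apache.xmlrpc.XmlRpcClient;

public class ClientSketch {
   public static void main(String[] args) throws Exception {
      XmlRpcClient client = new XmlRpcClient("http://localhost:8080/");
      Vector params = new Vector();
      params.add(new Integer(42)); // hypothetical person id
      // Handler name + "." + method name, matching Listing 9's registration
      Object result = client.execute(PersonHome.class.getName() + ".getPerson", params);
      System.out.println(result);
   }
}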



Ramifications

In this section, I briefly discuss a few ramifications that arise from the preceding discussion. I look at limitations of both the XML-RPC protocol and this article's framework, but I also consider opportunities introduced by this approach.

Limitations

XML-RPC is a simple protocol and obviously cannot implement programmatic APIs for remote procedure calls that feature all aspects of an object-oriented system. Notably, such an API will not support the following:



  • Inheritance: XML-RPC is not sufficiently rich to carry information that would determine which type along an inheritance hierarchy is intended. This is true both for the interfaces on which remote calls are invoked and for the objects passed as parameters. Therefore, declaring all classes involved as final is a good practice.


  • Overloading: XML-RPC does not allow method overloading. In principle, it is possible to overload methods that have only primitive types in their signatures, but naturally this option is not enough. As we need to infer the structure's type from a method's signature, we cannot allow overloading. We could only safely allow the use of the same name for methods with different numbers of parameters because all parameters are always available during a remote method invocation. I have not implemented this option, preferring instead to use different method names. Note that Web services don't offer much more in this respect. Even flexible frameworks like Axis have limitations regarding overloading.


  • Collections: XML-RPC does not allow collections. As with overloading, we would have to infer the type of items in a collection from a given collection type, which is not possible (before Java 1.5). Instead, we use arrays, which we can query for their component types. Though Web services are more powerful than XML-RPC for remote method invocation, many advise against using collections there as well; see "Web Services Programming Tips and Tricks: Use Collection Types with SOAP and JAX-RPC" by Russell Butek and Richard Scheuerle, Jr. (developerWorks, April 2002).


  • Null values: XML-RPC does not support the value null. This is perhaps the protocol's most disconcerting drawback because it means we cannot have null values in arrays. A proposal exists for including null values in XML-RPC, but most implementations don't support it. Needless to say, if the processes on both sides of a communication link talk Java, some of these problems might be overcome by artificially inserting metadata into the messages. This, however, means abusing the protocol, which is never a good idea.

Controlling serialization

Serialization is a process that happens behind the scenes. In particular, the framework proposed in this article finds properties to serialize automatically. Sometimes, however, you might wish to prevent certain properties from being serialized.



Suppose a Person object has a reference to various Address objects that differ in type. In particular, one of those addresses might be the mailing address, while others are significant in other contexts. You might wish to enhance your Person class with a Person.getMailingAddress() method that returns the mailing address. Standard introspection will then see a new property, namely mailingAddress, and this property will be written during serialization in addition to the entire list of addresses. In the best case, a corresponding Person.setMailingAddress() method will be written such that, regardless of the order in which the address properties are deserialized, the result is an object identical to the one serialized. Of course, your methods should be written such that the serialization order does not matter, but even if you write them correctly, somebody at the other end (who might be using a different language) might be unaware of your thinking, increasing the potential for problems. In any case, you would incur the overhead of serializing the mailing address twice.
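
A minimal sketch of the pitfall (the Address type and its isMailingAddress() method are assumed here for illustration):

public class Person {
   private Address[] addresses;

   public Address[] getAddresses() { return addresses; }
   public void setAddresses(Address[] addresses) { this.addresses = addresses; }

   // Convenience accessor: introspection now also reports a
   // "mailingAddress" property, so this address is serialized twice.
   public Address getMailingAddress() {
      for (int i = 0; i < addresses.length; i++) {
         if (addresses[i].isMailingAddress()) {
            return addresses[i];
         }
      }
      return null;
   }
}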



But there is help. The Introspector can be told not to use reflection when looking for a class's properties and to use supplied information instead. This information is found in a BeanInfo class, which must be named MyClassBeanInfo if your class is called MyClass. This BeanInfo class must be either in the same package as MyClass or in one of the packages listed in the Introspector's BeanInfo search path, which can be set on the Introspector itself. When providing a BeanInfo class, you will usually just wish to supply the properties, as follows:



Listing 10. BeanInfo Example 1




public class MyClassBeanInfo extends SimpleBeanInfo {
   public PropertyDescriptor[] getPropertyDescriptors() {
      try {
         BeanInfo superInfo = Introspector.getBeanInfo(MyClass.class.getSuperclass());
         List list = new ArrayList();
         for (int i = 0; i < superInfo.getPropertyDescriptors().length; i++) {
            list.add(superInfo.getPropertyDescriptors()[i]);
         }
         // Expose only the properties listed explicitly here:
         list.add(new PropertyDescriptor("myProperty", MyClass.class));
         return (PropertyDescriptor[]) list.toArray(new PropertyDescriptor[list.size()]);
      } catch (IntrospectionException e) {
         return null;
      }
   }
}




The method getPropertyDescriptors() must return the properties represented by property descriptors. First, add the properties of your class's superclass; then add those properties of your own class that you wish to expose, as shown in the listing.



There is a serious drawback here: the above proposal implies a lot of hard coding, which you ideally want to avoid. More precisely, adding all of the properties that should be serialized is probably more work than listing those that should be excluded. One approach is to use the Introspector to first get all properties via reflection by calling Introspector.getBeanInfo(MyClass.class, Introspector.IGNORE_ALL_BEANINFO), and then to apply a filter to the result you return. This approach might look like this:



Listing 11. BeanInfo Example 2




public class MyClassBeanInfo extends SimpleBeanInfo {
   public PropertyDescriptor[] getPropertyDescriptors() {
      try {
         BeanInfo infoByReflection = Introspector.getBeanInfo(MyClass.class,
               Introspector.IGNORE_ALL_BEANINFO);
         PropertyDescriptor[] allProperties = infoByReflection.getPropertyDescriptors();
         return filter(allProperties);
      } catch (IntrospectionException e) {
         return null;
      }
   }

   protected PropertyDescriptor[] filter(PropertyDescriptor[] props) {
      // Remove properties which must not be exposed
   }
}



A better way is to build a framework on some form of interface definition language (IDL), which allows you to generate beans and extend the properties and methods by hand if you need to. The generator will be responsible for providing BeanInfo classes that filter out just the properties defined in the IDL. Continue reading for an example of such a language.

Adding value

As we have hidden the actual transport mechanism, it is easy to add information to messages sent and received. Suppose we are required to pass session information with each remote method invocation. This information could be added in the invocator and the handler as a first argument (wrapping all necessary information into a suitable bean). At the other end, this information would be removed from the vector of parameters and handled separately from the method invocation. Extending the code available from Resources in this direction may be a useful way to play around with the framework.
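
As a rough sketch of this idea, the client side could prepend a session bean to every outgoing parameter Vector before dispatching the call. SessionInfo is an assumed bean of your own design; ValueConverter is the article's converter:

import java.util.Vector;

public class SessionAwareInvoker {
   private final SessionInfo session; // assumed bean holding session data

   public SessionAwareInvoker(SessionInfo session) {
      this.session = session;
   }

   // Builds the parameter Vector for a remote call: the session bean
   // travels as an artificial first argument, followed by the real ones.
   public Vector buildArguments(Object[] args) throws Exception {
      Vector params = new Vector();
      params.add(ValueConverter.convertFromType(session));
      for (int i = 0; args != null && i < args.length; i++) {
         params.add(ValueConverter.convertFromType(args[i]));
      }
      return params;
   }
}

The server-side Handler would then strip the first element off the Vector and reconstruct the session bean before looking up the target method.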

Other languages

Weaknesses can be considered strengths, provided you look at them correctly. XML-RPC's simplicity leads to the limitations described above. However, XML-RPC implementations are now available for many languages, including Ruby and Python, and even functional languages such as Haskell. Not all of these languages support inheritance as understood in object-oriented languages, and not all allow method overloading. Some languages, such as Haskell, have flexible list types that, from a Java perspective, fall somewhere between arrays and lists. Hence, the inherent limitations of XML-RPC make it a suitable candidate for communication across language boundaries.



When XML-RPC is chosen for bridging the gap between Java and some other language, you can still use the framework presented here, but you will be able to use it only for the Java side of the communication channel. However, you could extend the framework to cover other languages. For instance, you could rewrite the framework in another language and then add support for the transformation of Java interfaces and data objects into corresponding objects in the other language. Another approach, which I have already hinted at above, is to write a compiler that turns a suitable form of IDL into code for the various languages, Java among them. I give an example of this approach below.



Needless to say, such approaches for extending this article's framework will be more involved than the framework itself, but they will work along similar lines.

Removing or replacing the XML-RPC implementation

A production system might prefer to avoid the use of an intermediate XML-RPC framework and instead transform the XML data of XML-RPC directly into suitable objects. You might consider abstracting calls to the XML-RPC framework by hiding them behind suitable interfaces that can be implemented for various XML-RPC implementations. As I have seen no need to do so in our work, I have not implemented this functionality. Again, you are invited to adapt the framework as suits your needs.
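
One possible shape for such an abstraction (the interface name is mine, not the article's):

import java.util.Vector;

// A narrow seam between the framework and any concrete XML-RPC library;
// one implementation might wrap Apache XML-RPC's XmlRpcClient, another
// a hand-rolled HTTP-plus-XML transport.
public interface RpcTransport {
   Object call(String method, Vector arguments) throws Exception;
}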

Remote Method Invocation

With J2SE 1.5, RMI will also use the proxy mechanism under the hood. Using the rmic compiler to generate stub classes is no longer necessary (unless you wish to interoperate with older versions). Thus, if a generated stub class cannot be loaded, the remote object's stub will be an instance of java.lang.reflect.Proxy.

Interface definition language

An obvious way to remove some of the pains involved in observing bean conventions and the various restrictions imposed by XML-RPC, which I have discussed above, is to avoid writing interfaces and beans and instead generate them with a suitable IDL. Such a language might look as follows:



Listing 12. IDL




module partner;

exception NoPartnerException < 123 : "No partner found" >;

struct Partner {
   int id;
   string name;
   int age;
   date birthday;
};

interface PartnerHome {
   Partner getPartner(int id) throws NoPartnerException;
   Partner[] findPartner(string name, date bday) throws NoPartnerException;
};



Writing a parser and code generator based on such an IDL offers an easy way to facilitate cross-language communication.

Summary

In this article, I have shown how the power of Java reflection can be used to transparently wrap the complexity of remote method invocation via XML-RPC. I have placed particular emphasis on often overlooked mechanisms that have been incorporated in the Proxy, Array, and Introspector classes. Based on these utilities, a simple middleware framework for remote method invocation has been constructed that can be readily adapted to various needs.



About the author

Stephan Maier holds a Ph.D. in mathematics and has been involved in software development for more than five years. He has been a teacher and coach of state-of-the-art technology for most of his career. Apart from programming, he enjoys singing and sports. Currently, he is working on a compiler that turns a simple form of IDL into suitable versions of datastructures and remote interfaces for languages such as Java, Ruby, or Python, where the underlying protocol for remote calls is XML-RPC.





Posted by 아름프로


Generating an XML Document with JAXB


by Deepak Vohra

12/15/2004





An XML Schema represents the structure of an XML document in XML syntax. J2EE developers may require an XML document to conform to an XML Schema. The Java Architecture for XML Binding (JAXB) provides a binding compiler, xjc, to generate Java classes from an XML Schema. The Java classes generated with the JAXB xjc utility represent the different elements and complexTypes in an XML Schema. (A complexType provides for constraining an element by specifying the attributes and elements in an element.) An XML document that conforms to the XML Schema may be constructed from the Java classes.



In this tutorial, JAXB is used to generate Java classes from an XML Schema, and an example XML document is then created from those classes. This article is structured into the following sections:


  1. Preliminary Setup
  2. Overview
  3. Generating Java Classes from XML Schema
  4. Creating an XML Document from Java Classes




Preliminary Setup




To generate Java classes from an XML Schema with JAXB, the JAXB API classes and the xjc utility are required on the classpath. Install the Java Web Service Developer Pack (JWSDP) 1.5 to an installation directory, referred to below as <JWSDP>. Add the following .jar files to the CLASSPATH variable.



  • <JWSDP>/jaxb/lib/jaxb-api.jar
  • <JWSDP>/jaxb/lib/jaxb-impl.jar
  • <JWSDP>/jaxb/lib/jaxb-libs.jar
  • <JWSDP>/jaxb/lib/jaxb-xjc.jar

  • <JWSDP>/jwsdp-shared/lib/namespace.jar
  • <JWSDP>/jwsdp-shared/lib/jax-qname.jar
  • <JWSDP>/jwsdp-shared/lib/relaxngDatatype.jar



<JWSDP> is the directory in which Java Web Service Developer Pack 1.5 is installed. Add <JWSDP>/jaxb/bin to the PATH variable; this directory contains the xjc compiler. Also add the <JWSDP>/jwsdp-shared/bin directory to the PATH variable; it contains the setenv batch file, which sets the environment variables JAVA_HOME, ANT_HOME, and JWSDP_HOME.


Overview




JAXB generates Java classes and interfaces corresponding to the top-level elements and top-level complexType elements. In an XML Schema, an element is declared with <xs:element>, and a complexType with <xs:complexType>. In this tutorial, an example schema that represents articles published in a scientific journal is compiled with the JAXB binding compiler. This schema has top-level element and complexType declarations. The example XML Schema, catalog.xsd, is below (the markup, stripped in this copy, is reconstructed from the structure described in this article):

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

   <xs:element name="catalog" type="catalogType"/>

   <xs:complexType name="catalogType">
      <xs:sequence>
         <xs:element ref="journal" minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
      <xs:attribute name="section" type="xs:string"/>
      <xs:attribute name="publisher" type="xs:string"/>
   </xs:complexType>

   <xs:element name="journal" type="journalType"/>

   <xs:complexType name="journalType">
      <xs:sequence>
         <xs:element ref="article" minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
   </xs:complexType>

   <xs:element name="article" type="articleType"/>

   <xs:complexType name="articleType">
      <xs:sequence>
         <xs:element name="title" type="xs:string"/>
         <xs:element name="author" type="xs:string"/>
      </xs:sequence>
      <xs:attribute name="level" type="xs:string"/>
      <xs:attribute name="date" type="xs:string"/>
   </xs:complexType>

</xs:schema>
Some of the XML Schema constructs are not supported by JAXB. If such unsupported constructs are included in a schema, an error will be generated when you try to generate Java classes from them with xjc. The following schema elements are not supported: xs:any, xs:anyAttribute, xs:notation, xs:redefine, xs:key, xs:keyref, and xs:unique. The following schema attributes are not supported: complexType.abstract, element.abstract, element.substitutionGroup, xsi:type, complexType.block, complexType.final, element.block, element.final, schema.blockDefault, and schema.finalDefault.


Generating Java Classes


The xjc utility is run on the schema to bind a schema to Java classes. Run the xjc utility on the example schema with the command:

>xjc catalog.xsd





Some of the options for the xjc command-line interface are listed below (argument placeholders reconstructed):

   -nv                  Strict validation of the input schema(s) is not performed.
   -b <file>            Specifies the external binding file.
   -d <dir>             Specifies the directory for generated files.
   -p <package>         Specifies the target package.
   -classpath <arg>     Specifies the classpath.
   -use-runtime <pkg>   The impl.runtime package does not get generated.
   -xmlschema           The input schema is a W3C XML Schema (default).

For the example schema catalog.xsd, xjc generates the 45 files (43 Java classes and interfaces, plus bgm.ser and jaxb.properties) shown in xjc's output below:


parsing a schema...
compiling a schema...
generated\impl\runtime\ErrorHandlerAdaptor.java
generated\impl\runtime\MSVValidator.java
generated\impl\runtime\NamespaceContext2.java
generated\impl\runtime\UnmarshallableObject.java
generated\impl\runtime\MarshallerImpl.java
generated\impl\runtime\ValidationContext.java
generated\impl\runtime\UnmarshallerImpl.java
generated\impl\runtime\DefaultJAXBContextImpl.java
generated\impl\runtime\ContentHandlerAdaptor.java
generated\impl\runtime\GrammarInfoFacade.java
generated\impl\runtime\UnmarshallingContext.java
generated\impl\runtime\UnmarshallingEventHandlerAdaptor.java
generated\impl\runtime\XMLSerializable.java
generated\impl\runtime\Discarder.java
generated\impl\runtime\PrefixCallback.java
generated\impl\runtime\SAXMarshaller.java
generated\impl\runtime\NamespaceContextImpl.java
generated\impl\runtime\UnmarshallingEventHandler.java
generated\impl\runtime\GrammarInfo.java
generated\impl\runtime\InterningUnmarshallerHandler.java
generated\impl\runtime\ValidatableObject.java
generated\impl\runtime\GrammarInfoImpl.java
generated\impl\runtime\ValidatingUnmarshaller.java
generated\impl\runtime\ValidatorImpl.java
generated\impl\runtime\SAXUnmarshallerHandlerImpl.java
generated\impl\runtime\XMLSerializer.java
generated\impl\runtime\Util.java
generated\impl\runtime\SAXUnmarshallerHandler.java
generated\impl\runtime\AbstractUnmarshallingEventHandlerImpl.java
generated\impl\ArticleImpl.java
generated\impl\ArticleTypeImpl.java
generated\impl\CatalogImpl.java
generated\impl\CatalogTypeImpl.java
generated\impl\JAXBVersion.java
generated\impl\JournalImpl.java
generated\impl\JournalTypeImpl.java
generated\Article.java
generated\ArticleType.java
generated\Catalog.java
generated\CatalogType.java
generated\Journal.java
generated\JournalType.java
generated\ObjectFactory.java
generated\bgm.ser
generated\jaxb.properties



A Java interface and a Java class are generated corresponding to each top-level xs:element and top-level xs:complexType in the example XML Schema. A factory class (ObjectFactory.java), consisting of methods to create interface objects, also gets generated.
The ObjectFactory.java class is in this article's sample code file,
jaxb-java-resources.zip.




Catalog.java is the interface generated corresponding to the top-level element catalog. An interface generated from a schema element extends the javax.xml.bind.Element interface. Catalog.java is illustrated in the listing below.



package generated;

public interface Catalog
      extends javax.xml.bind.Element, generated.CatalogType {
}



CatalogType.java is the generated interface corresponding to the top-level complexType catalogType. The CatalogType interface consists of the getter and setter methods for each of the attributes of the catalog element, and a getter method for the journal elements in the catalog element. CatalogType.java is illustrated in the following listing.




package generated;

public interface CatalogType {
   java.lang.String getSection();
   void setSection(java.lang.String value);
   java.util.List getJournal();
   java.lang.String getPublisher();
   void setPublisher(java.lang.String value);
}





CatalogImpl.java and CatalogTypeImpl.java are the Java classes generated for the Catalog.java and CatalogType.java interfaces, respectively.







Creating an XML Document from the Java Classes


In this section, an example XML document shall be created from the Java classes generated with JAXB. The example XML document, catalog.xml, is illustrated in the following listing (the markup, stripped in this copy, is reconstructed from the steps below; attribute values are shown only where the text specifies them):

<?xml version="1.0" encoding="UTF-8"?>
<catalog section="Java Technology" publisher="IBM developerWorks">
   <journal>
      <article level="Intermediate" date="January-2004">
         <title>Service Oriented Architecture Frameworks</title>
         <author>Naveen Balani</author>
      </article>
      <article>
         <title>Advance DAO Programming</title>
         <author>Sean Sullivan</author>
      </article>
      <article>
         <title>Best Practices in EJB Exception Handling</title>
         <author>Srikanth Shenoy</author>
      </article>
   </journal>
</catalog>


Create a CatalogImpl class object from the Java classes and marshal the CatalogImpl class object with a Marshaller to construct an XML document.



Creating the Marshaller




First, import the javax.xml.bind package, which contains the Marshaller, Unmarshaller, and JAXBContext classes. The Marshaller class is used to convert a Java object into XML data. The Unmarshaller class converts an XML document to a Java object.



import javax.xml.bind.*;


Create a JAXBContext. A JAXBContext object is required to implement the JAXB binding framework operations marshal, unmarshal, and validate. An application creates a new instance (object) of the JAXBContext class with the static method newInstance(String contextPath). The contextPath specifies a list of Java package names for the schema-derived interfaces.



JAXBContext jaxbContext=JAXBContext.newInstance("generated");


The directory generated contains the JAXB-generated classes and interfaces.



Create a Marshaller with the createMarshaller method. The Marshaller class has overloaded marshal methods to marshal (that is, convert a Java object to XML data) into SAX2 events, a Document Object Model (DOM) structure, an OutputStream, a javax.xml.transform.Result, or a java.io.Writer object.




Marshaller marshaller=jaxbContext.createMarshaller();
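
Optionally, the marshaller can be asked for indented output via a standard JAXB marshaller property (shown here as a brief aside):

marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);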


Creating a Java Object for an XML Document: CatalogImpl



To create a Java object, first create an ObjectFactory; implementation class instances are created with it. For each of the schema-derived Java classes, the ObjectFactory defines a factory method that produces an object of that class.




ObjectFactory factory=new ObjectFactory(); 


Create a catalog element with the createCatalog method of the ObjectFactory class. CatalogImpl is the implementation class for the interface Catalog.



CatalogImpl catalog=(CatalogImpl)(factory.createCatalog());



Set the section attribute of the catalog element with the setSection method in the CatalogImpl class.




catalog.setSection("Java Technology");



Set the publisher attribute of the catalog element with the setPublisher method.



catalog.setPublisher("IBM developerWorks");




Creating a Java Object for an XML Document: JournalImpl and ArticleImpl



Create a journal element with the createJournal method of the ObjectFactory class. JournalImpl is the implementation class for the interface Journal.



JournalImpl journal=(JournalImpl)(factory.createJournal());



Add the journal element to the catalog element. Obtain a java.util.List of JournalImpl for a CatalogImpl and add the journal element to the List.



java.util.List journalList=catalog.getJournal();
journalList.add(journal);



Create the article element in the journal element with the createArticle method of the ObjectFactory class. ArticleImpl is the implementation class for the Article interface.



ArticleImpl article=(ArticleImpl)(factory.createArticle());


Set the level attribute of the article element with the setLevel method in the ArticleImpl class.



article.setLevel("Intermediate");



Set the date attribute of the article element with the setDate method.



article.setDate("January-2004");



Create the title element in the article element with the setTitle method.



article.setTitle("Service Oriented Architecture Frameworks");



Create the author element of the article element with the setAuthor method.



article.setAuthor("Naveen Balani");



Add the article element to the journal element. Obtain a java.util.List of ArticleImpl for a JournalImpl and add the article element to the List.




java.util.List  articleList=journal.getArticle();
articleList.add(article);




Following the same procedure used for this article element, add the other article elements to complete the example XML document, catalog.xml.



Marshalling the Java Object to an XML Document



Marshal the CatalogImpl object to an XML document with the marshal method of the class Marshaller. The CatalogImpl object is marshalled to an OutputStream.




marshaller.marshal(catalog, new FileOutputStream(xmlDocument));



xmlDocument is the output XML java.io.File object, representing the XML document shown at the beginning of this section.
JAXBConstructor.java, the program used to create an XML document from the Java classes, is in this article's sample code file,
jaxb-java-resources.zip.
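
For reference, the steps above assemble into a short program along the following lines. This is a minimal sketch; the actual JAXBConstructor.java in the sample code may differ in detail:

import java.io.FileOutputStream;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

import generated.ObjectFactory;
import generated.impl.ArticleImpl;
import generated.impl.CatalogImpl;
import generated.impl.JournalImpl;

public class JAXBConstructorSketch {
   public static void main(String[] args) throws Exception {
      JAXBContext jaxbContext = JAXBContext.newInstance("generated");
      Marshaller marshaller = jaxbContext.createMarshaller();

      ObjectFactory factory = new ObjectFactory();
      CatalogImpl catalog = (CatalogImpl) factory.createCatalog();
      catalog.setSection("Java Technology");
      catalog.setPublisher("IBM developerWorks");

      JournalImpl journal = (JournalImpl) factory.createJournal();
      catalog.getJournal().add(journal);

      ArticleImpl article = (ArticleImpl) factory.createArticle();
      article.setLevel("Intermediate");
      article.setDate("January-2004");
      article.setTitle("Service Oriented Architecture Frameworks");
      article.setAuthor("Naveen Balani");
      journal.getArticle().add(article);

      // Remaining articles are added the same way before marshalling.
      marshaller.marshal(catalog, new FileOutputStream("catalog.xml"));
   }
}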




Conclusion


JAXB provides the xjc binding compiler to generate Java classes from a schema; instances of those classes may subsequently be marshalled to an XML document. However, JAXB has a limitation: it does not support all XML Schema constructs.










Deepak Vohra
is a NuBean consultant and a web developer.




Posted by 아름프로


The Hidden Gems of Jakarta Commons, Part 1


by Timothy M. O'Brien

12/22/2004



If you are not familiar with the Jakarta Commons (http://jakarta.apache.org/commons), you have likely reinvented a few wheels. Before you write any more generic frameworks or utilities, grok the Commons. It will save you serious time. Too many people write a StringUtils class that duplicates methods available in Commons Lang's StringUtils (http://jakarta.apache.org/commons/lang), or developers unknowingly recreate the utilities in Commons Collections (http://jakarta.apache.org/commons/collections) even though commons-collections.jar is already available in the classpath. Seriously, take a break. Check out the Commons Collections API and then go back to your task; I promise you'll find something simple that will save you a week over the next year. If people just took some time to look at Jakarta Commons, we would have much less code duplication--we'd start making good on the real promise of reuse. I've seen it happen; somebody digs into Commons BeanUtils or Commons Collections and invariably they have an "Oh, if I had only known about this, I wouldn't have written 10,000 lines of code" moment. There are still parts of Jakarta Commons that remain a mystery to most; for instance, many have yet to hear of Commons CLI (http://jakarta.apache.org/commons/cli) or Commons Configuration (http://jakarta.apache.org/commons/configuration), and most have yet to notice the valuable functors package in Commons Collections. In this series, I emphasize some of the less-appreciated tools and utilities in the Jakarta Commons.




In this first part of the series, I explore XML rule set definitions in the Commons Digester, functors available in Commons Collections, and an interesting application of Commons JXPath (http://jakarta.apache.org/commons/jxpath): querying a List of objects. Jakarta Commons contains utilities that aim to help you solve problems at the lowest level of programming: iterating over collections, parsing XML, and selecting objects from a List. I would encourage you to spend some time focusing on these small utilities, as learning about the Jakarta Commons will save you a substantial amount of time. It isn't simply about using Commons Digester to parse XML or using CollectionUtils to filter a collection with a Predicate. You will start to see benefits once you realize how to combine the power of these utilities and how to relate Commons projects to your own applications; once this happens, you will come to see commons-lang.jar, commons-beanutils.jar, and commons-digester.jar as just as indispensable to any system as the JVM itself.














If you are interested in learning more about the Jakarta Commons, check out the Jakarta Commons Cookbook (http://www.oreilly.com/catalog/jakartackbk). This book is full of recipes that will get you hooked on the Commons, and tells you how to use Jakarta Commons in concert with other small open source components such as Velocity, FreeMarker, Lucene, and Jakarta Slide. In this book, I introduce a wide array of tools from Jakarta Commons, from using simple utilities in Commons Lang to combining Commons Digester, Commons Collections, and Jakarta Lucene to search the works of William Shakespeare. I hope this series and the Jakarta Commons Cookbook provide you with some interesting solutions for low-level programming problems.



1. XML-Based Rule Sets for Commons Digester




Commons Digester 1.6 provides one of the easiest ways to turn XML into objects. Digester has already been introduced on the O'Reilly Network in two articles: "Learning and Using Jakarta Digester," by Philipp K. Janert, and "Using the Jakarta Commons, Part 2," by Vikram Goyal. Both articles demonstrate the use of XML rule sets, but this idea of defining rule sets in XML has not caught on. Most sightings of the Digester appear to define rule sets programmatically, in compiled code. You should avoid hard-coding Digester rule sets in compiled Java code when you have the opportunity to store such mapping information in an external file or a classpath resource. Externalizing a Digester rule set makes it easier to adapt to an evolving XML document structure or an evolving object model.




To demonstrate the difference between defining rule sets in XML and
defining rule sets in compiled code, consider a system to parse XML to
a Person bean with three properties--id,
name, and age, as defined in the following class:



package org.test;

public class Person {
    public String id;
    public String name;
    public int age;

    public Person() {}

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}



Assume that your application needs to parse an XML file containing multiple person elements. The following XML file, data.xml, contains three person elements that you would like to parse into Person objects:



<people>
    <person id="1">
        <name>Tom Higgins</name>
        <age>25</age>
    </person>
    <person id="2">
        <name>Barney Smith</name>
        <age>75</age>
    </person>
    <person id="3">
        <name>Susan Shields</name>
        <age>53</age>
    </person>
</people>



You expect the structure and content of this XML file to change over
the next few months, and you would prefer not to hard-code the
structure of the XML document in compiled Java code. To do this, you
need to define Digester rules in an XML file that is loaded as a
resource from the classpath. The following XML document,
person-rules.xml, maps the person element to
the Person bean:



<digester-rules>
    <pattern value="people/person">
        <object-create-rule classname="org.test.Person"/>
        <set-next-rule methodname="add" paramtype="java.lang.Object"/>
        <set-properties-rule/>
        <bean-property-setter-rule pattern="name"/>
        <bean-property-setter-rule pattern="age"/>
    </pattern>
</digester-rules>



All this does is instruct the Digester to create a new instance of
Person every time it encounters a person
element, call add() to add this Person to an
ArrayList, set any bean properties that match attributes
on the person element, and set the name and
age properties from the sub-elements name
and age. You've seen the Person class, the
XML document to be parsed, and the Digester rule definitions in XML
form. Now you need to create an instance of Digester with
the rules defined in person-rules.xml. The following
code creates a Digester by passing the URL
of the person-rules.xml resource to the
DigesterLoader. Since the person-rules.xml
file is a classpath resource in the same package as the class parsing
the XML, the URL is obtained with a call to
getClass().getResource(). The
DigesterLoader then parses the rule definitions and adds
these rules to the newly created Digester:



import java.io.FileInputStream;
import java.io.InputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.commons.digester.Digester;
import org.apache.commons.digester.xmlrules.DigesterLoader;

// Configure Digester from XML rule set
URL rules = getClass().getResource("./person-rules.xml");
Digester digester = DigesterLoader.createDigester(rules);

// Push empty List onto Digester's stack
List people = new ArrayList();
digester.push(people);

// Parse the XML document
InputStream input = new FileInputStream("data.xml");
digester.parse(input);



Once the Digester has parsed the XML in
data.xml, three Person objects should be in
the people ArrayList.
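
A quick way to verify the result, continuing the snippet above (a sketch using the Person bean already defined):

Iterator iter = people.iterator();
while (iter.hasNext()) {
    Person p = (Person) iter.next();
    System.out.println(p.getId() + ": " + p.getName() + ", age " + p.getAge());
}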




The alternative to defining Digester rules in XML is to add them using
the convenience methods on a Digester instance. Most
articles and examples start with this method, adding rules using the
addObjectCreate() and
addBeanPropertySetter() methods on Digester.
The following code adds the same rules that were defined in
person-rules.xml:



digester.addObjectCreate("people/person", Person.class);
digester.addSetNext("people/person", "add", "java.lang.Object");
digester.addBeanPropertySetter("people/person/name", "name");
digester.addBeanPropertySetter("people/person/age", "age");



If you have ever found yourself working at an organization with 2,500-line classes to parse a huge XML document with SAX, or a whole collection of classes to work with DOM or JDOM, you understand that XML parsing is more complex than it needs to be in the majority of cases. If you are building a highly efficient system with strict speed and memory requirements, you need the speed of a SAX parser. If you need the complexity of DOM Level 3, use a parser like Apache Xerces (http://xml.apache.org/#xerces). But if you are simply trying to parse a few XML documents into objects, take a look at Commons Digester, and define your rule set in an XML file.




Any time you can move this type of configuration outside of compiled code, you should. I would encourage you to define your Digester rules in an XML file loaded either from the file system or the classpath. Doing so will make it easier to adapt your program to changes in the XML document and changes in your object model. For more information on defining Digester rules in an XML file, see Section 6.2 of the Jakarta Commons Cookbook, "Turning XML Documents into Objects."
























2. Functors in Commons Collections




Functors are an interesting part of Commons Collections 3.1 for two reasons: they haven't received the attention they warrant, and they have the potential to change the way you approach programming. Functor is just a fancy name for an object that encapsulates a function--a "functional object." And while they are certainly not the same thing, if you have ever used function pointers in C or C++, you'll understand the power of functors. A functor is an object--a Predicate, a Closure, or a Transformer. Predicates evaluate objects and return a boolean, Transformers evaluate objects and return new objects, and Closures accept objects and execute code. Functors can be combined into composite functors that model loops, logical expressions, and control structures, and functors can also be used to filter and operate upon items in a collection.




Explaining functors in an article as short as this may be impossible, so to "jump start" your introduction to functors, I will solve the same problem both with and without them. In this example, Student objects from an ArrayList are sorted into two List instances if they meet certain criteria; students with straight-A grades are added to an honorRollStudents list, and students with Ds and Fs are added to a problemStudents list. After the students are separated, the system will iterate through each list, giving the honor-roll students an award and scheduling a meeting with the parents of problem students. The following code implements this process without the use of functors:



List allStudents = getAllStudents();

// Create 2 ArrayLists to hold honorRoll students
// and problem students
List honorRollStudents = new ArrayList();
List problemStudents = new ArrayList();

// Iterate through all students. Put the
// honorRoll students in one List and the
// problem students in another.
Iterator allStudentsIter = allStudents.iterator();
while( allStudentsIter.hasNext() ) {
Student s = (Student) allStudentsIter.next();

if( s.getGrade().equals( "A" ) ) {
honorRollStudents.add( s );
} else if( s.getGrade().equals( "B" ) &&
s.getAttendance() == PERFECT) {
honorRollStudents.add( s );
} else if( s.getGrade().equals( "D" ) ||
s.getGrade().equals( "F" ) ) {
problemStudents.add( s );
} else if( s.getStatus() == SUSPENDED ) {
problemStudents.add( s );
}
}

// For all honorRoll students, add an award and
// save to the Database.
Iterator honorRollIter =
honorRollStudents.iterator();
while( honorRollIter.hasNext() ) {
Student s = (Student) honorRollIter.next();

// Add an award to student record
s.addAward( "honor roll", 2005 );
Database.saveStudent( s );
}

// For all problem students, add a note and
// save to the database.
Iterator problemIter = problemStudents.iterator();
while( problemIter.hasNext() ) {
Student s = (Student) problemIter.next();

// Flag student for special attention
s.addNote( "talk to student", 2005 );
s.addNote( "meeting with parents", 2005 );
Database.saveStudent( s );
}



The previous example is very procedural; the only way to figure out
what happens to a Student object is to step through each
line of code. The first half of this example is decision logic that
applies tests to each Student object and classifies
students based on performance and attendance. The second half of this
example operates on the Student objects and saves the result to the
database. A 50-line method body like the previous example is how most
systems begin--manageable procedural complexity. But problems start
to appear when the requirements start to shift. As soon as that
decision logic changes, you will need to start adding more clauses to
the logical expressions in the first half of the previous example.
For example, what happens to your logical expression if a student is
classified as a problem if he has a B and perfect attendance, but
attended detention more than five times? Or what happens to the
second half, when a student can be on the honor roll only if they were
not a problem last year? When exceptions and requirement changes
start to affect procedural code, manageable complexity turns into
unmaintainable spaghetti code.




Step back from the previous example and consider what that code was
doing. It was looking at every object in a List,
applying a criteria, and, if that criteria was satisfied, acting upon
an object. A critical improvement that could be made to the previous
example is the decoupling of the criteria from the code that acts upon
an object. The following two code excerpts solve the previous problem
in a very different way. First, the criteria for the honor roll and
problem students are modeled by two Predicate objects,
and the code that acts upon honor roll and problem students is
modeled by two Closure objects. These four objects are
defined below:



import org.apache.commons.collections.Closure;
import org.apache.commons.collections.Predicate;

// Anonymous Predicate that decides if a student
// has made the honor roll.
Predicate isHonorRoll = new Predicate() {
public boolean evaluate(Object object) {
Student s = (Student) object;

return( ( s.getGrade().equals( "A" ) ) ||
( s.getGrade().equals( "B" ) &&
s.getAttendance() == PERFECT ) );
}
};

// Anonymous Predicate that decides if a student
// has a problem.
Predicate isProblem = new Predicate() {
public boolean evaluate(Object object) {
Student s = (Student) object;

return ( ( s.getGrade().equals( "D" ) ||
s.getGrade().equals( "F" ) ) ||
s.getStatus() == SUSPENDED );
}
};

// Anonymous Closure that adds a student to the
// honor roll
Closure addToHonorRoll = new Closure() {
public void execute(Object object) {
Student s = (Student) object;

// Add an award to student record
s.addAward( "honor roll", 2005 );
Database.saveStudent( s );
}
};

// Anonymous Closure flags a student for attention
Closure flagForAttention = new Closure() {
public void execute(Object object) {
Student s = (Student) object;

// Flag student for special attention
s.addNote( "talk to student", 2005 );
s.addNote( "meeting with parents", 2005 );
Database.saveStudent( s );
}
};



The four anonymous implementations of Predicate and
Closure are separated from the system as a whole.
flagForAttention has no knowledge of what the criteria
are for a problem student, and the isProblem Predicate
only knows how to identify a problem student. What is needed is a way
to marry the right Predicate with the right
Closure, and this is shown in the following example.



import org.apache.commons.collections.ClosureUtils;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.functors.NOPClosure;

Map predicateMap = new HashMap();

predicateMap.put( isHonorRoll, addToHonorRoll );
predicateMap.put( isProblem, flagForAttention );
predicateMap.put( null, ClosureUtils.nopClosure() );

Closure processStudents =
ClosureUtils.switchClosure( predicateMap );

CollectionUtils.forAllDo( allStudents, processStudents );



In the previous code, the predicateMap matches
Predicates to Closures; if a
Student satisfies the Predicate in the key,
it will be passed to the Closure in the value. By
supplying a NOPClosure value and a null key,
we will pass Student objects that satisfy neither
Predicate to a "do nothing" or "no operation"
NOPClosure created by a call to
ClosureUtils. A SwitchClosure,
processStudents, is created from the
predicateMap, and the processStudents
Closure is applied to every Student object
in the allStudents using
CollectionUtils.forAllDo(). This is a very different
approach; notice that you are not iterating through any lists.
Instead, you set rules and consequences and
CollectionUtils and SwitchClosure take care
of the execution.





When you separate criteria using Predicates and actions using Closures, your code is less procedural and much easier to test. The isHonorRoll Predicate can be unit tested in isolation from the addToHonorRoll Closure, and both can be tested by supplying a mock instance of the Student class. The second example also demonstrates CollectionUtils.forAllDo(), which applies a Closure to every element in a Collection. You may have noticed that using functors did not reduce the line count; in fact, the use of functors increased it. But the real benefit of functors is the modularity and encapsulation of criteria and actions. If your method length tends toward hundreds of lines, consider a less procedural, more object-oriented approach--use a functor.




Chapter 4, "Functors," in the Jakarta Commons
Cookbook
introduces functors available in Commons Collections, and
Chapter 5, "Collections," shows you how to use functors with the Java Collections
API. All of the functors--Closure,
Predicate, and Transformer--can be combined
into composite functors that can be used to model any kind of logic.
switch, while, and for
structures can be modeled with SwitchClosure,
WhileClosure, and ForClosure. Compound
logical expressions can be constructed from multiple
Predicates using OrPredicate,
AndPredicate, AllPredicate, and
NonePredicate, among others. Commons BeanUtils also
contains functor implementations that are used to apply functors to
bean properties--BeanPredicate,
BeanComparator, and
BeanPropertyValueChangeClosure. Functors are a different
way of thinking about low-level application architecture, and they
could very well change your approach to coding.
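
As a small illustration of composing predicates, the honor-roll criterion from earlier could be split into two narrow predicates and recombined with PredicateUtils (a sketch reusing the Student class and PERFECT constant from the examples above):

import org.apache.commons.collections.Predicate;
import org.apache.commons.collections.PredicateUtils;

// Two narrow predicates...
Predicate hasGradeB = new Predicate() {
    public boolean evaluate(Object object) {
        return "B".equals(((Student) object).getGrade());
    }
};
Predicate hasPerfectAttendance = new Predicate() {
    public boolean evaluate(Object object) {
        return ((Student) object).getAttendance() == PERFECT;
    }
};

// ...combined: true only when both component predicates are true.
Predicate honorRollByAttendance =
        PredicateUtils.andPredicate(hasGradeB, hasPerfectAttendance);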



3. Using XPath Syntax to Query Objects and Collections




Commons JXPath
is a surprising (non-standard) use of an XML standard. XPath has been
around for some time as a way to select a node or node set in an XSL
style sheet. If you've worked with XML, you are probably familiar with
the syntax /foo/bar that selects the bar
sub-elements of the foo document element. Jakarta Commons
JXPath adds an interesting twist: you can use JXPath to select objects
from beans and collections, among other object types such as servlet
contexts and DOM Document objects. Consider a
List of Person objects. Each
Person object has a bean property of the type
Job, and each Job object has a
salary property of the type int.
Person objects also have a country property,
which is a two-letter country code. Using JXPath, it is easy to
select all Person objects with a US country
and a Job that pays more than one million
dollars. Here is some code to set up a List of beans to
filter with JXPath:



// Person's constructor sets firstName and country
Person person1 = new Person( "Tim", "US" );
Person person2 = new Person( "John", "US" );
Person person3 = new Person( "Al", "US" );
Person person4 = new Person( "Tony", "GB" );

// Job's constructor sets name and salary
person1.setJob( new Job( "Developer", 40000 ) );
person2.setJob( new Job( "Senator", 150000 ) );
person3.setJob( new Job( "Comedian", 3400302 ) );
person4.setJob( new Job( "Minister", 2000000 ) );

Person[] personArr =
new Person[] { person1, person2,
person3, person4 };

List people = Arrays.asList( personArr );



The people List contains four Person beans: Tim, John, Al, and Tony. Tim is a developer who makes $40,000, John is a senator who makes $150,000, Al is a comedian who walks home with $3.4 million, and Tony is a prime minister who makes 2 million. Our task is simple: iterate over this List and print the name of every Person who is a U.S. citizen making over one million dollars. Assume that people is a List of Person objects, and take a look at the solution without the benefit of JXPath:



Iterator peopleIter = people.iterator();
while( peopleIter.hasNext() ) {
    Person person = (Person) peopleIter.next();

    if( person.getCountry() != null &&
        person.getCountry().equals( "US" ) &&
        person.getJob() != null &&
        person.getJob().getSalary() > 1000000 ) {
        print( person.getFirstName() + " " +
               person.getLastName() );
    }
}



The previous example is heavy, and somewhat error-prone. To find the matching Person objects, you first need to iterate over each Person and test the country property of each. If the country property is not null and has the correct value, then you must test the job property to find out whether it is non-null and has a salary property greater than 1,000,000. The line count of the previous example can be dramatically reduced with Java 1.5's for syntax, but, even with Java 1.5, you still need to perform two comparisons at two different levels.
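
For comparison, here is how that filter looks with Java 1.5's enhanced for loop (a sketch assuming a typed List<Person>); the two levels of checks remain:

for (Person person : people) {
    if ("US".equals(person.getCountry()) &&
        person.getJob() != null &&
        person.getJob().getSalary() > 1000000) {
        System.out.println(person.getFirstName() + " " + person.getLastName());
    }
}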




What if you had to write a number of these queries against a set of
Person objects stored in memory? What if your
application had to display all of the Person objects in
England named Tony? Or, what if you had to print the name
of every Job with a salary less than 20,000? If you were
storing these objects in a relational database, you could solve this
by writing a SQL query, but if you are dealing with objects in memory,
you don't have this luxury. While XPath was primarily meant for XML,
you could use it to write "queries" against a collection of objects,
treating objects as elements and bean properties as sub-elements.
Yes, this is a strange application of XPath, but take a look at how
the following example performs three different queries against
people, an ArrayList of Person
objects.



import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;

import org.apache.commons.jxpath.JXPathContext;

public List queryCollection(String xpath, Collection col) {
    List results = new ArrayList();

    JXPathContext context = JXPathContext.newContext( col );

    Iterator matching = context.iterate( xpath );
    while( matching.hasNext() ) {
        results.add( matching.next() );
    }
    return results;
}

String query1 =
".[@country = 'US']/job[@salary > 1000000]/..";
String query2 =
".[@country = 'GB' and @name = 'Tony']";
String query3 =
"./job/name";

List richUsPeople =
queryCollection( query1, people );
List britishTony =
queryCollection( query2, people );
List jobNames =
queryCollection( query3, people );



The method queryCollection() takes an XPath expression and applies it to a Collection. XPath expressions are evaluated against a JXPathContext, which is created by calling JXPathContext.newContext() and passing in the Collection to be queried. Calling context.iterate() then applies the XPath expression to each item in the Collection, returning an Iterator over every matching "node" (or, in this case, "object"). The first query performed by the previous example, query1, is the same query from the original example implemented without JXPath. query2 selects all Person objects with a country property of GB and a name property of Tony, and query3 selects a List of String objects: the name property of all of the Job objects.




When I first saw Commons JXPath, it struck me as a bad idea. Why apply XPath expressions to objects? Something about it didn't feel right. But this unexpected use of XPath as a query language for a collection of beans has come in handy for me more than a few times in the past few years. If you find yourself looping through lists to find matching elements, consider using JXPath. For more information, see Chapter 12, "Searching and Filtering," of the Jakarta Commons Cookbook, which discusses Commons JXPath and Jakarta Lucene paired with Commons Digester.



And There's More




Stay tuned to this exploration of the far reaches of the Jakarta Commons. In the next part of this series, I'll introduce some related tools and utilities: set operations in Commons Collections, using Predicate objects with collections, configuring an application with Commons Configuration (http://jakarta.apache.org/commons/configuration), and using Commons Betwixt (http://jakarta.apache.org/commons/betwixt) to read and write XML. There is much to be gained from the Jakarta Commons that cannot be conveyed in a few thousand words, and I would encourage you to take a look at the Jakarta Commons Cookbook. Many of these utilities may, at first glance, seem somewhat trivial, but the power of Jakarta Commons lies in how these tools can be combined with each other and integrated into your own systems.




Timothy M. O'Brien
is a professional singer/programmer living and working in the Chicago area.




Posted by 아름프로


Working with Hibernate in Eclipse


by James Elliott, author of Hibernate: A Developer's Notebook

01/05/2005





Editor's Note: With our survey results showing a huge interest in Hibernate, we thought this would be a good week to bring back this piece, by the author of O'Reilly's Hibernate book, on how to use Hibernate with Eclipse, which was also a top vote-getter in the poll.



Introduction





I recently started using Eclipse as my
development environment, in part because of its support for the many platforms
on which I develop, and in part because Eclipse is a great example of the power of
an open, extensible environment in which people all around the world can
contribute. I'm beginning to investigate the extensions people have come up
with. For example, I use a little plugin called XMLBuddy to work with XML files, and it's very helpful. So I became curious about whether anyone had written plugins to work
with Hibernate, since I've done so much of
that recently in putting together the Developer's Notebook. It turns out there are several such efforts underway; in this article we will
explore one of them--the Hibernate Synchronizer.



Hibernate Synchronizer



Of the plugins I've found so far, the Hibernate Synchronizer
interested me most because it seems to best support the kind of mapping-centric
workflow I adopted throughout my Developer's Notebook. (Hibernate can be used
in many different ways, so check out the
other plugins available; these may be more helpful if your environment calls for another approach.) In fact, the Hibernate Synchronizer plugin removes the need for you to think about updating your Java code when you change your mapping document. In a very Eclipse-like way, it
automatically updates the Java code as you edit the mapping. But it goes even
farther than Hibernate's built-in code generation tools by creating a
pair of classes for each mapped object. It "owns" a base class, which
it rewrites at will as you change the mapping, and gives you a subclass that
extends this base class, where you can put business logic and other code,
without fear that it will ever get changed out from under you.






























As befits an approach centered around the Hibernate mapping document,
Hibernate Synchronizer includes a new editor component for Eclipse that provides
intelligent assistance and code completion for such documents. A nice DTD-driven
XML editor, such as the aforementioned XMLBuddy, can do some of this for you, but
Hibernate Synchronizer uses its semantic understanding of the documents to go
much further. It also offers a graphical view of the properties and relations in
the mapping, "wizard" interfaces for creating new elements, and other such
niceties. And, as mentioned, in its default configuration the editor
automatically regenerates the data-access classes as you edit their mapping
documents.



There are other pieces to Hibernate Synchronizer, too. It adds a section to
Eclipse's New menu that provides wizards for creating Hibernate
configuration and mapping files, and adds contextual menu entries in the package
explorer and in other appropriate places, providing easy access to relevant
Hibernate operations.



OK, enough abstract description; time to get down to the practical stuff! After all, you were probably already interested in this, or you wouldn't have started to read the article. So how do you get and play with Hibernate Synchronizer?



Installation



Hibernate Synchronizer is installed using Eclipse's built-in Update Manager.
The plugin offers separate update sites for users of Eclipse 2.1 and the
forthcoming Eclipse 3. Because I'm using Eclipse for mission-critical work, I'm
still using the production release, 2.1. As I write this, Eclipse 3 has entered
its "release candidate" phase, and I am very much looking forward to being able
to upgrade to a production release of version 3 when I return from JavaOne
later this summer. (The main reason I mention this is to emphasize that the
following instructions are written from an Eclipse 2 perspective; some commands
and screens are undoubtedly different in Eclipse 3, so if you're using it, be
sure to apply your own judgment in following these steps! If it helps, my
impression is that Hibernate Synchronizer's own
install instructions are written for Eclipse 3.)



Fire up Eclipse and open the Update Manager by choosing Help
-> Software Updates -> Update Manager. Once the
Install/Update perspective opens up, right-click (or control-click, if you're
using a one-button mouse) in the Feature Updates view and choose
New -> Site Bookmark, as shown in Figure 1.



Figure 1. Adding the Hibernate Synchronizer plugin site to the Update Manager



In the resulting dialog, enter the URL for the version of the plugin that you need. The URL to be entered depends on your Eclipse version:




  • Eclipse 2.1: http://www.binamics.com/hibernatesync/eclipse2.1

  • Eclipse 3: http://www.binamics.com/hibernatesync



You also need to assign a name for the new bookmark. "Hibernate Synchronizer"
makes a lot of sense. Figure 2 shows the dialog with all required information in
my Eclipse 2.1.2 environment. Once you've got it filled in, click
Finish to add the bookmark.



Figure 2. Bookmark for the Hibernate Synchronizer plugin update site



Once you click Finish, the new bookmark will appear in the Feature Updates
view, as shown in Figure 3.



Figure 3. The Hibernate Synchronizer site is now available for use



To actually install the plugin, click on the disclosure triangle to the left
of the bookmark, and again on the next one that appears inside of it, until you can
see the icon for the plugin itself. When you click on that, the Preview view
will update to show you an interface that allows you to install the plugin, as
shown in Figure 4.



Figure 4

Figure 4. Ready to install the plugin



Click Install Now to actually install it, and let Eclipse walk
you through the process (Figures 5-10).



Figure 5

Figure 5. Installing Hibernate Synchronizer



Figure 6

Figure 6. The plugin license agreement



See Trade-Offs, below, for some discussion about this license agreement. You may wish to read it carefully before deciding to use Hibernate Synchronizer in a project of your
own. I think it's probably fine, but it is confusingly based on the GPL without
actually being open source.



Figure 7

Figure 7. Choosing where to install the plugin; the default is fine



Figure 8

Figure 8. The standard warning for unsigned plugins



Figure 9

Figure 9. The install is underway



Figure 10

Figure 10. The install has completed



Now that the plugin is installed, you need to quit and relaunch Eclipse for
it to take effect. The dialog seems to imply that Eclipse will restart itself,
but in my experience, clicking Yes merely causes the environment to quit, and you
have to relaunch it manually. This may be a limitation of Eclipse 2.1's Mac OS
X implementation; Eclipse 3 is going to be the first release that promises
"first-class" support for OS X. In any case, this is a very minor issue. If you
need to restart Eclipse, do so now, because it's time to start configuring the
plugin to put it through its paces!



Configuration



Once Eclipse comes back up, you can close the Install/Update perspective. Open
a Java project that uses Hibernate. If you've been going through the examples in
the Developer's Notebook, you'll have several directories from which to choose. I'll
be looking at the examples as they exist in Chapter 3, which is the sample
chapter available
online.
You can also download the source for all of the examples from the book's site.



If you're creating a new Eclipse project to work with one of the example
source directories, just choose File -> New ->
Project, specify that you want to create a Java project and click
Next, give it a name ("Hibernate Ch3" in my case, as shown in
Figure 11), uncheck the Use default checkbox so that you can tell
Eclipse where to find the existing project directory, and hit the
Browse button to locate where it exists on your own drive. At this
point, you can click Finish to create the project, but I generally
like to click Next and double-check the decisions Eclipse is
making. (Of course, if it gets anything wrong, you can always go back and fix
the project properties, but I tend to find it disconcerting to be greeted by a
ton of errors and warnings immediately if there is a library missing or
something.)



Figure 11

Figure 11. Creating a new project to work with Hibernate



In this case, my caution was unnecessary. Eclipse figured out exactly how the
directory was structured and intended to be used, and found all of the third-party
libraries I had downloaded and installed in order to enable Hibernate and the
HSQLDB database engine to run. (A detailed walkthrough of this process is the
bulk of Chapter 1 of my Developer's Notebook.) This kind of smart adaptability
is one of the great features of Eclipse. Figure 12 shows our new project open
and ready for experimentation. It also shows that Eclipse doesn't like to fit
into a window small enough for a reasonable screen shot; I'm going to have to
work with partial window captures from this point on.



Figure 12

Figure 12. The Chapter 3 example project



The next thing we need to do is create a Hibernate configuration file that
Hibernate Synchronizer can use. There is already a hibernate.properties
file in the src directory, which is how the examples in the book work,
but Hibernate Synchronizer only works with Hibernate's XML-based configuration
approach. So we'll need to replicate the contents of hibernate.properties into a new hibernate.cfg.xml file. On the bright side, this gives us our first opportunity to play with a feature of
Hibernate Synchronizer, the configuration file wizard. Choose File
-> New -> Other, click the newly available
Hibernate category, pick Hibernate Configuration File,
and click Next.



Figure 13

Figure 13. Starting the Hibernate Configuration File wizard



When the wizard starts up, the directory it offers to put the file into
depends on the file you've currently got selected in Eclipse. Let's be sure to
put it at the top-level src directory alongside the properties version, for
consistency. Fill in the rest of the information requested by the wizard to
match the properties version of the configuration, as shown in Figure 14. Notice
that, unlike when using Ant to control the execution of Hibernate (which was the
approach used in the Developer's Notebook), we have no way to control the
current working directory when Hibernate is invoked, so we need to use a
fully qualified path to the database file in the URL. In my case, this takes the
(somewhat ungainly) value jdbc:hsqldb:/Users/jim/Documents/Work/OReilly/Hibernate/Examples/ch03/data/music.
(If anyone can tell me how to get Eclipse or Hibernate Synchronizer to use a
particular working directory for a project, I'd certainly be interested. I'm
still a beginner when it comes to Eclipse, so it would not surprise me at all to
learn that this is possible and that I simply don't know how to do it.)



Figure 14

Figure 14. Filling in the configuration file details



Filling in the Driver Class is a little strange: You need to click the
Browse button, and start typing the driver name. If you type
"jdbcD", the window will present only two choices, and you can easily click the
right one. This is illustrated in Figure 15.



Figure 15

Figure 15. Specifying the HSQLDB driver class



Once the wizard is set up to the extent of Figure 14, with values appropriate
for your own installation, you can click Finish to create the
configuration file. Hibernate Synchronizer is now ready to use. It opens the
file it created so you can see the structure and details of an XML configuration
file for Hibernate.




Figure 16

Figure 16. The generated configuration file
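

For reference, here is a minimal sketch of what such a generated file can look
like for the book's HSQLDB setup; treat the property values (especially the
database path) as placeholders rather than the wizard's exact output:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 2.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-2.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- Values mirror the book's hibernate.properties -->
        <property name="hibernate.dialect">net.sf.hibernate.dialect.HSQLDialect</property>
        <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
        <!-- Fully qualified path, as discussed above; adjust for your machine -->
        <property name="hibernate.connection.url">jdbc:hsqldb:/path/to/ch03/data/music</property>
        <property name="hibernate.connection.username">sa</property>
        <property name="hibernate.connection.password"></property>
        <!-- The wizard may add further properties, discussed below -->
    </session-factory>
</hibernate-configuration>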



A quick way to test that the configuration is working is to play with the
other wizard interface. Choose File -> New ->
Other, click the newly available Hibernate category,
pick Hibernate Mapping File, and click Next. When the
wizard comes up, it should be populated with the settings information we just
entered, and you can click the Refresh button to make sure it can
communicate with the database and show you that it found a TRACK
table. The first time you do this, you might have to confirm the location of the
.jar file containing the HSQLDB driver, for some reason, but that seems to happen
only once. In any case, once you confirm that everything seems to be working,
click Cancel rather than actually creating the mapping, because we
want to work with our hand-created mapping file that already exists.



Generating Code



This is probably the part you've been waiting for. What cool stuff can we do?
Well, right away there is a new contextual menu entry available for Hibernate
mapping documents.



If you right-click (or control-click) on one, you get a
number of Hibernate-related choices (Figure 17), including one to synchronize.
This is a manual way to ask Hibernate Synchronizer to generate the data access
objects associated with the mapping document.



Figure 17

Figure 17. Synchronizer choices for mapping documents



The Add Mapping Reference choice is also useful: it adds an
entry to the main Hibernate configuration file telling it about this mapping
document, so you don't need to put anything in your source code to request that
the corresponding mapping gets set up. For now, let's look at the result of
choosing Synchronize Files.



This is where things start to get interesting. We end up with two new
sub-packages, one for the "base" data access objects that Hibernate Synchronizer
"owns" and can rewrite at any time, and one for our business objects that
subclass these DAOs, which will not get overwritten, and give us an opportunity
to add business logic to the data class (shown in Figure 18).



Figure 18

Figure 18. The synchronized data access objects, showing our editable subclass
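

To make the split concrete, here is a tiny, self-contained illustration of the
pattern (the names are hypothetical, not the plugin's actual output): the base
class belongs to the generator, while the subclass is where hand-written logic
safely lives.

// Generated half: rewritten on every synchronize, so never edited by hand.
abstract class BaseWidgetDAO {
    public void save(String widget) {
        // Stand-in for the real persistence logic
        System.out.println("Saving " + widget);
    }
}

// Editable half: business logic added here survives regeneration.
public class WidgetDAO extends BaseWidgetDAO {
    public void saveIfValid(String widget) {
        if (widget != null && widget.length() > 0) {
            save(widget);
        }
    }

    public static void main(String[] args) {
        new WidgetDAO().saveIfValid("sprocket");
    }
}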



There are many more classes generated this way than by using the normal
Hibernate code generation facilities, which has advantages, as well as some
potential disadvantages, which I discuss later in the
Trade-Offs
section. Note also that in the properties configuration for your project, you can
choose which of these classes get generated for you, as well as the package
structure into which they are generated. I'd demonstrate this, but the current
release of the plugin has a bug
which blocks access to this configuration interface on Mac OS X. A fix has been
made, but not yet released.



Based on the examples on the Hibernate Synchronizer page, I put together the
following class to try inserting some data into the music database using these
new data access objects. It's quite similar to the version using the standard
Hibernate code generator (on pages 39-40 of Hibernate: A Developer's Notebook) and even simpler because the classes generated by Hibernate Synchronizer create and commit a new transaction for each of your database
operations, so you don't need any code to set one up in simple situations like
this. (There are ways of doing so if you need to have a group of operations
operate as a single transaction, of course.) Here's the code for the new
version:



package com.oreilly.hh;

import java.sql.Time;
import java.util.Date;
import net.sf.hibernate.HibernateException;
import com.oreilly.hh.dao.TrackDAO;
import com.oreilly.hh.dao._RootDAO;

/**
 * Try creating some data using the Hibernate Synchronizer approach.
 */
public class CreateTest2 {

    public static void main(String[] args) throws HibernateException {
        // Load the configuration file
        _RootDAO.initialize();

        // Create some sample data
        TrackDAO dao = new TrackDAO();
        Track track = new Track("Russian Trance", "vol2/album610/track02.mp3",
            Time.valueOf("00:03:30"), new Date(), (short)0);
        dao.save(track);

        track = new Track("Video Killed the Radio Star",
            "vol2/album611/track12.mp3", Time.valueOf("00:03:49"), new Date(),
            (short)0);
        dao.save(track);

        // We don't even need a track variable, of course:
        dao.save(new Track("Gravity's Angel", "/vol2/album175/track03.mp3",
            Time.valueOf("00:06:06"), new Date(), (short)0));
    }
}


Having Eclipse around while I was writing this was very nice. I'd forgotten
how much I missed intelligent code completion while I was writing the examples
for the book, and there are several other things the JDT helps with too.



To run this simple program within Eclipse, we need to set up a new Run
configuration. Choose Run -> Run... with
CreateTest2.java as the currently active editor file. Click on
New and Eclipse figures out that we want to run this class in our
current project, because we created it with a main() method. The
default name it assigns, CreateTest2, is fine. The screen will look
something like Figure 19. Click Run to try creating some data.



Figure 19

Figure 19. Ready to run our creation test in Eclipse



If you've been exactly following along on your own, you'll find that this
first attempt at execution fails: Hibernate complains that the configuration
file contains no mapping references, and at least one is required. Ah ha! So
that's what XMLBuddy was warning about with the yellow underline near
the bottom of Figure 16. We can easily fix this by right-clicking on the Track.hbm.xml
mapping document in the Package Explorer view and choosing Add Mapping
Reference in the new Hibernate Synchronizer submenu. That makes XMLBuddy
happy, and allows the run to get further. Unfortunately, not as far as we might
like, though. The next error was a complaint about not being able to find the
JTA UserTransaction initial context in JNDI. It turned out I wasn't
the only person having this problem; it was discussed in a forum thread, but no one had yet found a solution.



Since I knew I didn't need to use JTA, I wondered why Hibernate was even
trying. I opened up the Hibernate configuration file (Figure 16) and looked for anything suspicious that Hibernate Synchronizer had put there. Sure enough, there were some lines that looked like prime suspects:



<property name="hibernate.transaction.factory_class">
    net.sf.hibernate.transaction.JTATransactionFactory
</property>
<property name="jta.UserTransaction">
    java:comp/UserTransaction
</property>


Once I tried commenting these out and running again, the third time was
indeed the charm. My run completed with no errors, and my data appeared in the
database. Hurrah! Running the trusty ant db target
(explained in Chapter 1 of the Developer's Notebook) reveals the data in all its
(admittedly simple) glory, as shown in Figure 20. If you're doing this yourself,
be sure to start with an ant schema to create the
database schema or empty out any test data that may be there from previous
experimentation.
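

For reference, after commenting them out, that section of hibernate.cfg.xml
looks something like this (using ordinary XML comments):

<!--
<property name="hibernate.transaction.factory_class">
    net.sf.hibernate.transaction.JTATransactionFactory
</property>
<property name="jta.UserTransaction">
    java:comp/UserTransaction
</property>
-->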



Figure 20

Figure 20. The data created by our test program



Note that you can run Ant targets from within Eclipse by right-clicking (or
control-clicking) on the build.xml file within the Package Explorer,
choosing Run Ant, and picking the target using an Eclipse dialog.
Pretty cool.



Figure 21

Figure 21. Running Ant from within Eclipse



Getting data back out using queries is pretty straightforward, although this
time it's a lot closer to the same code you'd use with the ordinary
Hibernate-generated data access classes. Even though Hibernate Synchronizer
generates a number of helper methods for working with named queries, I don't
think any of them is particularly useful, because they all insist on running the
query and returning the list of results, rather than giving you the
Query object to work with yourself. That prevents you from using
any of Query's convenient type-safe parameter setting methods.
Because of that, I decided to stick to having the _RootDAO object
give me a Hibernate Session to work with the "old fashioned" way.
In fairness, I think I could edit the templates used by Hibernate Synchronizer
to generate any methods I'd like, and would almost certainly look into doing
that if I was going to undertake a project with it.



Actually, on further reflection, because you can only work with a
Query while you've got an active Session, the methods
offered by the DAOs already work the best way they possibly can. You're always
going to have to do your own session management if you want to work with the
query the way I do in this example. You could embed the session management into
the business logic provided in "your" half of the DAO, though, which would give
you the best of both worlds. That's another reason the split-class model offered
by Hibernate Synchronizer is so useful. I explore this insight a bit below.



Anyway, here's the code I first came up with, morally quite equivalent to
that on pages 48-49 of the book:



package com.oreilly.hh;

import java.sql.Time;
import java.util.ListIterator;

import net.sf.hibernate.HibernateException;
import net.sf.hibernate.Query;
import net.sf.hibernate.Session;

import com.oreilly.hh.dao.TrackDAO;
import com.oreilly.hh.dao._RootDAO;

/**
 * Use Hibernate Synchronizer's DAOs to run a query
 */
public class QueryTest3 {

    public static void main(String[] args) throws HibernateException {
        // Load the configuration file and get a session
        _RootDAO.initialize();
        Session session = _RootDAO.createSession();

        try {
            // Print the tracks that will fit in five minutes
            Query query = session.getNamedQuery(
                TrackDAO.QUERY_COM_OREILLY_HH_TRACKS_NO_LONGER_THAN);
            query.setTime("length", Time.valueOf("00:05:00"));
            for (ListIterator iter = query.list().listIterator() ;
                    iter.hasNext() ; ) {
                Track aTrack = (Track) iter.next();
                System.out.println("Track: \"" + aTrack.getTitle() +
                    "\", " + aTrack.getPlayTime());
            }
        } finally {
            // No matter what, close the session
            session.close();
        }
    }
}


One nice feature that TrackDAO does give us is a static
constant by which we can request the named query, eliminating any chances of
run-time errors due to typos in string literals. I appreciate that! Setting up
and executing a Run configuration for this test class produces the output I'd
expect, as shown in Figure 22.



Figure 22

Figure 22. The query results in Eclipse's console view



As I noted above, after getting this class working, I
realized there was a better way to approach it, given the model offered by
Hibernate Synchronizer. Here's what our TrackDAO object would look
like if we moved the query inside of it, which is where it really belongs, given
that the named query is a feature of the mapping file associated with that data
access object:



package com.oreilly.hh.dao;

import java.sql.Time;
import java.util.List;

import net.sf.hibernate.HibernateException;
import net.sf.hibernate.Query;
import net.sf.hibernate.Session;

import com.oreilly.hh.base.BaseTrackDAO;

/**
 * This class has been automatically generated by Hibernate Synchronizer.
 * For more information or documentation, visit The Hibernate Synchronizer page
 * at http://www.binamics.com/hibernatesync or contact Joe Hudson at joe@binamics.com.
 *
 * This is the object class that relates to the TRACK table.
 * Any customizations belong here.
 */
public class TrackDAO extends BaseTrackDAO {

    // Return the tracks that fit within a particular length of time
    public static List getTracksNoLongerThan(Time time)
            throws HibernateException {
        Session session = _RootDAO.createSession();
        try {
            // Find the tracks that fit within the given time
            Query query = session.getNamedQuery(
                QUERY_COM_OREILLY_HH_TRACKS_NO_LONGER_THAN);
            query.setTime("length", time);
            return query.list();
        } finally {
            // No matter what, close the session
            session.close();
        }
    }
}


This is nice and clean, and it simplifies the main() method in
QueryTest3 even more:



    public static void main(String[] args) throws HibernateException {
        // Load the configuration file
        _RootDAO.initialize();

        // Print the tracks that fit in five minutes
        List tracks = TrackDAO.getTracksNoLongerThan(Time.valueOf("00:05:00"));
        for (ListIterator iter = tracks.listIterator() ;
                iter.hasNext() ; ) {
            Track aTrack = (Track) iter.next();
            System.out.println("Track: \"" + aTrack.getTitle() +
                "\", " + aTrack.getPlayTime());
        }
    }


Clearly this is the approach to take when working with named queries and
Hibernate Synchronizer. A quick test confirms that it produces the same output,
and it's much better code.



Whether or not you want to use Hibernate Synchronizer to generate its own
style of data access objects, there is one last major feature to explore.



Editing Mappings



One of the main attractions of Hibernate Synchronizer is its specialized
editor for mapping documents. This editor can be configured to automatically
regenerate the associated data objects whenever you save files, but that's just
a final touch; you might want to use the editor even if you're not using the
plugin's code generator. It gives you smart completion of mapping document
elements, and a graphical outline view in which you can manipulate them, as
well.



There is a trick to getting the editor to work for you, though, at least if
you're starting from the downloaded source code from my Developer's Notebook. In
the download, the mapping documents are named with the extension
".hbm.xml," and the editor is only invoked for files ending with
".hbm". In theory, you can configure the extension mappings within
Eclipse so that both extensions use the plugin's mapping document editor, but I
wasn't able to get that to work, and I saw that someone else on the support
forum had the same problem. So, at least for now, your best bet may be to rename
the files. (If you're going to stick with Ant-based standard code generation, be
sure to update the codegen target in build.xml to use the
new extension, too.)
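

For example, if your codegen target selects mapping files with an Ant fileset,
the change is just the include pattern (a sketch; your target's structure may
differ):

<fileset dir="${source.root}">
    <!-- was: <include name="**/*.hbm.xml"/> -->
    <include name="**/*.hbm"/>
</fileset>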



As soon as I renamed Track.hbm.xml to Track.hbm, its icon
in the Package Explorer was updated to look like the Hibernate logo, and the
default editor became the plugin's, as shown in Figure 23. For whatever reason,
the other Hibernate Synchronizer options (as shown in Figure 17) are available with either extension, but the editor is available only with the shorter version.



Figure 23

Figure 23. The contextual menu for a Hibernate mapping document (with the extension ".hbm")



The editor has context-sensitive completion support for all of the elements
you're adding within the mapping document. Figure 24 shows a couple of examples,
but no screen shots can really capture the depth and usefulness of a feature
like this; I'd very much encourage you to install the plugin and play with it
yourself for a while. You will quickly see how helpful it can be in working with
mapping documents.



Figure 24



Figure 25

Figures 24 and 25. Completion assistance in the mapping document editor



The outline view, shown in Figure 26, gives you a graphical view of the
hierarchy of classes, their mapped elements, named queries, and the like that
are present in your mapping document, as well as giving you a menu offering a
few wizards to help you create new ones.







Figure 26 Figure 27


Figures 26 and 27. The mapping editor's outline view, and the "Add property" wizard



The contextual menu within the editor itself also offers a Format
Source Code option you can use to clean up and re-flow the document.
There are already many neat and useful features in this editor, and it'll be
interesting to see how it grows in the future. My only complaint (and a minor
one at that) is that this editor uses a very different approach to helping you
manage quotation marks when you complete XML attributes than the JDT does in
Java code. Switching back and forth between them can be somewhat disorienting.
(The way the JDT works takes a little getting used to itself, but once you start
trusting it, it's almost magical.)




Generating the Database Schema



Despite my first impression that everything flowed from the mapping document,
Hibernate Synchronizer doesn't currently offer any support for creating or
updating a database schema from your mapping documents. There has already been a
request posted to the support forum about this, and it wouldn't surprise me if
we saw these features in the future; support shouldn't be too difficult. For
now, you'll have to stick with an approach like the Ant-driven one in
Hibernate: A Developer's Notebook if you're developing your
database from your mappings. Alternately, the Hibernator plugin described
below
does support schema updates from within Eclipse. I may have to
investigate whether it's possible to have both of these plugins installed at the
same time.
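

For reference, the Ant-driven approach boils down to a target along these lines
(a sketch modeled on the book's setup; the property names and paths here are
assumptions, not an exact listing):

<target name="schema" description="Generate DB schema from mapping files">
    <!-- SchemaExportTask ships with Hibernate 2 -->
    <taskdef name="schemaexport"
             classname="net.sf.hibernate.tool.hbm2ddl.SchemaExportTask"
             classpathref="project.class.path"/>
    <schemaexport properties="${source.root}/hibernate.properties"
                  quiet="no" text="no" drop="no">
        <fileset dir="${source.root}">
            <include name="**/*.hbm.xml"/>
        </fileset>
    </schemaexport>
</target>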



Well, I certainly hope this whirlwind tour has given you a sense of the
capabilities offered by the plugin. I haven't covered all of them, by any means,
so do download it and explore on your own if anything has intrigued you.



Trade-Offs



Clearly you can do some neat things with Hibernate Synchronizer. Will I be
using it for my own Hibernate projects? There are some pluses and minuses to
that idea, and I probably won't decide until I get to the point of actually
adopting Hibernate in place of our homebrew (and very simplistic) lightweight
O/R tool at work. That is going to be a significant enough change that we are
putting it off until we tackle a major architecture shift that's on the horizon
for other reasons. Here are some of the factors that will weigh in my
decision.



As mentioned in the Installation section, there is a little bit of concern
about the license out there. The plugin's forum has a
discussion
about this. The current license is based on a custom modification of the GNU GPL
that removes all the source-sharing provisions, but tries to retain the other
aspects of "copyleft" protection. There is some question about the legitimacy of
this, and the author is looking for an alternative. It is clear that the
intention is to protect the plugin, not to encumber any other project that
happens to use the plugin to generate code, but it may be worth carefully
reading the current license to see if you believe that intent has been achieved,
or if there is too much risk for you.



The same discussion reveals that the author had originally released the
plugin as open source, but withdrew it temporarily because he felt it wasn't yet
polished enough to serve as a good example to others. He then had some very
annoying email interactions with hotheads who, sadly, soured him on the whole
idea of sharing the source. It is certainly his prerogative to decide what, if
anything, to share with us. The plugin is a gift to the world, and the author
doesn't owe us anything. But I hope that enough positive interactions with other
users might help convince him to go back to his original plan of sharing the
source. I really value having access to the source code of tools that I use, not
only because it is a very valuable learning opportunity, but because it
means I (or others) can fix little problems immediately if we need to. The
author has been very responsive so far in addressing user concerns, but no one
person can keep up as well as a community, and we all sometimes get busy, burned
out, or otherwise distracted.



The fact that Hibernate Synchronizer uses its own templates and mechanism to
generate your data access class is both positive and negative. It's positive in
that it gives you more capabilities than Hibernate's "standard" code generation
tools. The ability to work with an auto-generated subclass of your data object
in which you can embed business logic without fear of it getting overwritten
when you regenerate the access code is a big plus. And there are other niceties
offered by the plugin's generated classes that make many of the simple cases
even simpler.



On the other hand, this also means that Hibernate Synchronizer's generated
code can lag behind Hibernate when there are new features added or changes made
to the platform. The plugin's code is also more likely to have bugs in its
support for Hibernate's less-used modes: it has a much smaller user base, and a
single person keeping it updated. You can see evidence of this phenomenon on the
discussion forum.



As with so many things, it's up to you to decide whether the potential
benefits outweigh the risks. Even if you don't use the code generator, you might
find the mapping editor extremely useful. You can turn off automatic
synchronization if you want to just use the editor's completion and assistance
features.



If you do adopt the plugin and find it useful, I would definitely encourage
you to contact the author and thank him, and consider donating some money to
help support its further development.



Other Plugins



In my hunting so far, I've encountered two more plugins that offer support
for Hibernate within Eclipse. (If you know of others, or come across them in the
future, I'd be interested in learning about them.) Perhaps I'll write articles about these in the future.



HiberClipse



The HiberClipse plugin
looks like another very useful tool. It seems geared towards a database-driven
workflow, where you've already got a database schema and want to build a
Hibernate mapping file and Java classes to work with it. This is a common
scenario, and if you find yourself facing such a challenge, I'd definitely
recommend checking out this plugin. One really cool feature it offers is a
graphical "relationships view" of the database you're working with, right within
Eclipse. (I should point out that Hibernate Synchronizer doesn't leave you high
and dry if you want to start with an existing database schema, either. Its New
Mapping File Wizard can connect to your database and build the mapping file
based on what it finds.)



Figure 28

Figure 28. Hibernate Synchronizer's Mapping Wizard



Hibernator



Finally, Hibernator seems to
lean in the opposite direction, starting from your Java code to generate a
simple Hibernate mapping document, and then from there letting you build (or
update) the database schema. It also offers the ability to run database queries
within Eclipse. Of the three plugins, it appears to be at the earliest stages of
development, but already looks worth keeping an eye on, especially since it
cites members of the Hibernate development team as contributors.



Learning More



If I've managed to pique your interest in this article, there are plenty of
resources to help you dig deeper into these topics. In addition to the sites
I've linked to throughout the text, there are some books that might interest you.
Of course, I have to mention my own,
Hibernate: A Developer's
Notebook
. For in-depth reference material about Hibernate, the online documentation is very useful, especially the reference manual, and there is a forthcoming book by the developers of Hibernate itself, Hibernate in Action. I look forward to reading that myself.



As for Eclipse, I'm currently working through Steve Holzner's Eclipse and looking forward to the Eclipse Cookbook that will be released later this month. My blog discusses my Eclipse "conversion" in more detail in case you're curious about that (or teetering on the edge yourself). If you're just getting started, be sure to explore the
"Getting Started" sections of Eclipse's built-in Workbench and Java Development
user guides. These show you how the environment is intended to be used, give you
some good suggestions, and walk you through processes and features you might not
otherwise discover quickly on your own. Choose Help -> Help Contents within Eclipse to find them.




Posted by 아름프로


Caching Dynamic Content with JSP 2.0


by Andrei Cioroianu

01/05/2005





Content caching is one of the most common optimization techniques used in web applications, and it can be implemented easily. For example, you can use a custom JSP tag--let's call it <jc:cache>--to wrap every page fragment that must be cached between <jc:cache> and </jc:cache>. Any custom tag can control when its body (i.e., the wrapped page fragment) is executed, and the dynamic output can be captured. The <jc:cache> tag lets the JSP container (e.g., Tomcat) generate the content only once, storing each cached fragment as a JSP variable in the application scope. Every time the JSP page is executed, the custom tag inserts the cached page fragment without re-executing the JSP code that generated the output. A tag library developed as part of the Jakarta project uses this technique, and it works fine when the cached content doesn't need to be customized for each user or request.



This article improves the technique described above, allowing the JSP page to customize the cached content for each request or user, using the JSP 2.0 Expression Language (EL). Cached page fragments can contain JSP expressions that are not evaluated by the JSP container; instead, the custom tag evaluates these expressions each time the page is executed. Therefore, the creation of the dynamic content is optimized, but the cached fragments can have pieces of content that are generated for each request using the native expression language of JSP. This is possible with the help of the JSP 2.0 EL API, which exposes the expression language to the Java developer.



Content Caching Versus Data Caching



Content caching is not the only option. For example, data extracted from a database can be cached, too. In fact, data caching can be more efficient, since it stores the information without the HTML markup, requiring less memory. In many situations, however, content caching is easier to implement. Let's suppose you have lots of business objects producing some complex data, using significant CPU resources. You also have JSP pages that present the data. Everything works well until one day when the server's load suddenly increases, which requires an urgent solution. Building a caching tier between those business objects and the presentation tier can be a very elegant and efficient solution, but it could be much quicker and easier to modify the JSP pages, caching the dynamic content. Changes in the application's business logic usually require more work and more testing than simple editing of the JSP pages. In addition, there are fewer changes in the web tier when one page aggregates information from multiple sources. The problem is that the cache sometimes needs to be invalidated when the information becomes stale, and the business objects better know when this happens. Therefore, when choosing to implement content caching, data caching, or another optimization technique, you have to take into account many factors, which are sometimes specific to the application you are building.



Data caching and content caching do not necessarily exclude each other. They can be used together; for example, in database-driven applications. Data extracted from the database and the HTML that presents the data can be cached separately. This is similar to using some sort of templates, which are generated on the fly using JSP. The techniques based on the EL API discussed in this article show how you could use the JSP EL to insert the data into the templates for presentation.



Using JSP Variables to Cache Dynamic Content



Every time you implement a caching mechanism, you need a way to store the cached objects, which are strings in the case presented in this article. You could use an object-caching framework, or you might implement a custom caching solution, using Java maps. JSP already provides the so-called "scoped attributes" or "JSP variables," which offer the ID-object mappings needed by the caching mechanism. It doesn't make sense to use the page or request scopes, but the application scope is a good place for storing the cached content, since it's shared by all users and all pages. The session scope can also be used when you need one cache per user, but this isn't very efficient. The JSTL tag library can be used to cache content, using JSP variables as in the following example:



<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<c:if test="${empty applicationScope.cachedFragment}">
    <c:set var="cachedFragment" scope="application">
        ...
    </c:set>
</c:if>



The cached page fragment can be outputted with:



${applicationScope.cachedFragment}


What happens if the cached fragment needs to be customized for each request? For example, if you want to include a counter, you need to cache two fragments:



<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<c:set var="counter" value="${counter + 1}" scope="session"/>

<c:if test="${empty applicationScope.cachedFragment1}">
    <c:set var="cachedFragment1" scope="application">
        ...
    </c:set>
    <c:set var="cachedFragment2" scope="application">
        ...
    </c:set>
</c:if>



Then, you can output the cached content with:



${cachedFragment1} ${counter} ${cachedFragment2}


It is much easier to cache the page fragments that need customization with the help of a specialized tag library. As already mentioned, the cached content can be wrapped between a start tag (<jc:cache>) and an end tag (</jc:cache>), while each customization is represented by another tag (<jc:dynamic>) that outputs a JSP expression (${...}). The dynamic content is cached with JSP expressions that are evaluated each time the cached content is outputted. You'll see how this is implemented in the following sections of the article. The counter.jsp page caches a page fragment containing a counter that is incremented each time the user refreshes the page:



<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix="jc" uri="http://devsphere.com/articles/jspcache" %>

<c:set var="counter" value="${counter + 1}" scope="session"/>

<jc:cache id="cachedFragmentWithCounter">
    ... <jc:dynamic expr="counter"/> ...
</jc:cache>







JSP variables are easy to use and are a good content-caching solution for simple web apps. The lack of control over the cache's size may be a problem, though, if your application produces large amounts of dynamic content. A dedicated caching framework would provide a more robust solution, allowing you to monitor the cache, limit the cache's size, control the caching policy, and so on.
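

If you outgrow JSP variables, the heart of what such a framework adds can be
seen even in a toy size-bounded cache. Here is a minimal LRU sketch built on
java.util.LinkedHashMap (a hypothetical helper, not part of this article's tag
library):

import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache: the eldest entry is evicted once the size cap is hit.
// A real framework adds thread safety, statistics, and pluggable policies.
public class LruCache extends LinkedHashMap {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration enables LRU eviction
        this.maxEntries = maxEntries;
    }

    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > maxEntries;
    }
}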



Using the JSP 2.0 Expression Language API



JSP containers (such as Tomcat) evaluate the expressions from the JSP pages using the EL API, which you can use in your Java code, too. This allows you to work with the JSP EL outside of your web pages, for example, in XML files, text-based resources, and custom scripts. The EL API is also useful when you want to control when the expressions from a web page are evaluated or when you build expressions programmatically. For example, cached page fragments can contain JSP expressions for customization and the EL API will be used to evaluate and reevaluate those expressions each time the cached content is outputted.



The example application (see Resources below) provided with this article includes a Java class (JspUtils) that contains a method named eval(), which takes three parameters: a JSP expression, the expected type of the expression's value, and a JSP context object. The eval() method gets an ExpressionEvaluator from the JSP context and calls the evaluate() method, passing the expression, the expected type, and a variable resolver that is obtained from the JSP context. The JspUtils.eval() method returns the value of the expression:



package com.devsphere.articles.jspcache;

import javax.servlet.jsp.JspContext;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.PageContext;
import javax.servlet.jsp.el.ELException;
import javax.servlet.jsp.el.ExpressionEvaluator;

import java.io.IOException;

public class JspUtils {
    public static Object eval(
            String expr, Class type, JspContext jspContext)
            throws JspException {
        try {
            if (expr.indexOf("${") == -1)
                return expr;
            ExpressionEvaluator evaluator
                = jspContext.getExpressionEvaluator();
            return evaluator.evaluate(expr, type,
                jspContext.getVariableResolver(), null);
        } catch (ELException e) {
            throw new JspException(e);
        }
    }
    ...
}


Note that JspUtils.eval() is basically a wrapper around the standard ExpressionEvaluator. If expr doesn't contain ${, the JSP EL API isn't used, since there are no JSP expressions.
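

As a quick illustration, a tag handler might call it like this (a hypothetical
snippet that assumes it runs inside a SimpleTagSupport subclass's doTag() in a
JSP 2.0 container):

// Evaluate a template string against the current page's variables
// and write out the result.
String template = "Hello, ${param.user}!";
String greeting = (String) JspUtils.eval(
    template, String.class, getJspContext());
getJspContext().getOut().print(greeting);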



Creating the TLD



The JSP tag library needs a Tag Library Descriptor (TLD) file that specifies the names of the custom tags, their attributes, and the Java classes that handle the custom tags. The jspcache.tld file describes the two custom tags. The <jc:cache> tag has two attributes: the id of the cached page fragment and the JSP scope where the content should be stored. The <jc:dynamic> tag has only one attribute, which should be a JSP expression that must be evaluated each time the cached fragment is outputted. The TLD file maps the two custom tags to the CacheTag and DynamicTag classes, which are presented in the following sections:





<?xml version="1.0" encoding="UTF-8"?>
<taglib xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee web-jsptaglibrary_2_0.xsd"
    version="2.0">
    <tlib-version>1.0</tlib-version>
    <short-name>jc</short-name>
    <uri>http://devsphere.com/articles/jspcache</uri>

    <tag>
        <name>cache</name>
        <tag-class>com.devsphere.articles.jspcache.CacheTag</tag-class>
        <body-content>scriptless</body-content>
        <attribute>
            <name>id</name>
            <required>true</required>
            <rtexprvalue>true</rtexprvalue>
        </attribute>
        <attribute>
            <name>scope</name>
            <required>false</required>
            <rtexprvalue>false</rtexprvalue>
        </attribute>
    </tag>

    <tag>
        <name>dynamic</name>
        <tag-class>com.devsphere.articles.jspcache.DynamicTag</tag-class>
        <body-content>empty</body-content>
        <attribute>
            <name>expr</name>
            <required>true</required>
            <rtexprvalue>false</rtexprvalue>
        </attribute>
    </tag>
</taglib>





The TLD file is declared in the web application descriptor (web.xml), which also contains an initialization parameter that indicates whether the cache is enabled or not:





<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee web-app_2_4.xsd"
    version="2.4">

    <context-param>
        <param-name>com.devsphere.articles.jspcache.enabled</param-name>
        <param-value>true</param-value>
    </context-param>

    <jsp-config>
        <taglib>
            <taglib-uri>http://devsphere.com/articles/jspcache</taglib-uri>
            <taglib-location>/WEB-INF/jspcache.tld</taglib-location>
        </taglib>
    </jsp-config>
</web-app>



Understanding How <jc:cache> Works



For each occurrence of the <jc:cache> tag in a JSP page, the JSP container creates a CacheTag instance, which is prepared for handling the tag. It is the responsibility of the JSP container to call the setJspContext(), setParent(), and setJspBody() methods that CacheTag inherits from SimpleTagSupport. The JSP container also calls a setter method for each attribute of the handled tag. The setId() and setScope() methods store the attribute values into the private fields, which are initialized with default values in the CacheTag() constructor:



package com.devsphere.articles.jspcache;

import javax.servlet.ServletContext;
import javax.servlet.jsp.JspContext;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.PageContext;
import javax.servlet.jsp.tagext.SimpleTagSupport;

import java.io.IOException;
import java.io.StringWriter;

public class CacheTag extends SimpleTagSupport {
    public static final String CACHE_ENABLED
        = "com.devsphere.articles.jspcache.enabled";
    private String id;
    private int scope;
    private boolean cacheEnabled;

    public CacheTag() {
        id = null;
        scope = PageContext.APPLICATION_SCOPE;
    }

    public void setId(String id) {
        this.id = id;
    }

    public void setScope(String scope) {
        this.scope = JspUtils.checkScope(scope);
    }
    ...
}


The setScope() method calls JspUtils.checkScope() to verify the value of the scope attribute, which is converted from String to int:



...
public class JspUtils {
    ...
    public static int checkScope(String scope) {
        if ("page".equalsIgnoreCase(scope))
            return PageContext.PAGE_SCOPE;
        else if ("request".equalsIgnoreCase(scope))
            return PageContext.REQUEST_SCOPE;
        else if ("session".equalsIgnoreCase(scope))
            return PageContext.SESSION_SCOPE;
        else if ("application".equalsIgnoreCase(scope))
            return PageContext.APPLICATION_SCOPE;
        else
            throw new IllegalArgumentException(
                "Invalid scope: " + scope);
    }
}


Once the CacheTag instance is prepared to handle the <jc:cache> tag, the JSP container calls the doTag() method, which obtains a JSP context with getJspContext(). This object is cast to PageContext in order to call getServletContext(). The servlet context is used to obtain the value of the initialization parameter that indicates whether the caching mechanism is enabled or not. If the cache is enabled, doTag() tries to get the cached page fragment, using the values of the id and scope attributes. If the page fragment hasn't been cached yet, doTag() uses getJspBody().invoke() to execute the JSP code wrapped between <jc:cache> and </jc:cache>. The output generated by the JSP body is buffered in a StringWriter and is obtained with toString(). At this point, doTag() calls the setAttribute() method of the JSP context to create a JSP variable that will hold the cached content, which may contain JSP expressions (${...}). Those expressions are evaluated with JspUtils.eval() before the content is outputted with jspContext.getOut().print(). All of these actions take place only if the cache is enabled. Otherwise, doTag() just executes the JSP body with getJspBody().invoke(null) and the output is not cached:



...
public class CacheTag extends SimpleTagSupport {
    ...
    public void doTag() throws JspException, IOException {
        JspContext jspContext = getJspContext();
        ServletContext application
            = ((PageContext) jspContext).getServletContext();
        String cacheEnabledParam
            = application.getInitParameter(CACHE_ENABLED);
        cacheEnabled = cacheEnabledParam != null
            && cacheEnabledParam.equals("true");
        if (cacheEnabled) {
            String cachedOutput
                = (String) jspContext.getAttribute(id, scope);
            if (cachedOutput == null) {
                StringWriter buffer = new StringWriter();
                getJspBody().invoke(buffer);
                cachedOutput = buffer.toString();
                jspContext.setAttribute(id, cachedOutput, scope);
            }
            String evaluatedOutput = (String) JspUtils.eval(
                cachedOutput, String.class, jspContext);
            jspContext.getOut().print(evaluatedOutput);
        } else
            getJspBody().invoke(null);
    }
    ...
}


Note that a single call of JspUtils.eval() evaluates all ${...} expressions, since a text containing multiple ${...} constructs is an expression, too. Each cached fragment can be processed as a big JSP expression.



The isCacheEnabled() method returns the value of the cacheEnabled flag, which is initialized in doTag():



...
public class CacheTag extends SimpleTagSupport {
    ...
    public boolean isCacheEnabled() {
        return cacheEnabled;
    }
}


The <jc:cache> tag allows the page developer to choose the IDs of the cached page fragments. This opens up the possibility of caching a page fragment that is shared by multiple JSP pages, which is helpful when you reuse JSP code, but it also requires some naming convention to avoid conflicts. If you want to avoid this side effect, you can modify the CacheTag class to include the page's URL within the ID automatically.
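

Such a change might look like the following helper inside CacheTag
(hypothetical code, not part of the article's library), assuming the tag always
runs during an HTTP request:

// Scope the fragment ID by the page's URI so identical IDs
// on different pages cannot collide.
private String uniqueId() {
    PageContext pageContext = (PageContext) getJspContext();
    javax.servlet.http.HttpServletRequest request =
        (javax.servlet.http.HttpServletRequest) pageContext.getRequest();
    return request.getRequestURI() + ":" + id;
}

doTag() would then pass uniqueId() instead of id to getAttribute() and setAttribute().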



Understanding What <jc:dynamic> Does



Each <jc:dynamic> tag is handled by an instance of the DynamicTag class, whose setExpr() method stores the value of the expr attribute into a private field. The doTag() method builds a JSP expression, adding the ${ prefix and the } suffix to the value of the expr attribute. Then, doTag() uses findAncestorWithClass() to find the CacheTag handler of the <jc:cache> element that contains the <jc:dynamic> tag. If the ancestor isn't found or caching is disabled, the JSP expression is evaluated with JspUtils.eval() and its value is outputted. Otherwise, doTag() outputs the unevaluated expression:



package com.devsphere.articles.jspcache;

import javax.servlet.jsp.JspException;
import javax.servlet.jsp.tagext.SimpleTagSupport;

import java.io.IOException;

public class DynamicTag extends SimpleTagSupport {
    private String expr;

    public void setExpr(String expr) {
        this.expr = expr;
    }

    public void doTag() throws JspException, IOException {
        String output = "${" + expr + "}";
        CacheTag ancestor = (CacheTag) findAncestorWithClass(
            this, CacheTag.class);
        if (ancestor == null || !ancestor.isCacheEnabled())
            output = (String) JspUtils.eval(
                output, String.class, getJspContext());
        getJspContext().getOut().print(output);
    }
}


Analyzing the above code fragments, you'll observe that <jc:cache> and <jc:dynamic> cooperate in order to implement a solution that is as efficient as possible. If the cache is enabled, page fragments are buffered together with the JSP expressions that are generated by DynamicTag and evaluated by CacheTag. If the cache is disabled, the buffering is not necessary, and <jc:cache> just executes its JSP body, letting DynamicTag evaluate the JSP expressions. It is useful to disable caching, especially during development, when the content is changed and the JSP pages are recompiled. Of course, caching should be enabled in a production environment.



Summary



Content caching is a very easy way to improve the performance of your web applications. This article focused on customizing the cached content for each user or request, using the JSP Expression Language. The simple tag library presented throughout the article is suitable for small web apps and could be enhanced for medium ones. If you develop large enterprise applications, you should consider using a framework that provides a better caching mechanism than JSP variables, but you'll probably still find the customization technique based on the EL API useful.



Resources






Andrei Cioroianu
is the founder of Devsphere and an author of many Java articles published by ONJava, JavaWorld, and Java Developer's Journal.




Posted by 아름프로


Mock Objects in Unit Tests


by Lu Jian

01/12/2005





The use of mock objects is a widely employed unit-testing strategy. It shields the test from
external and irrelevant factors and helps developers focus on the specific function to be tested.



EasyMock is a well-known mock tool that can create a mock object for a
given interface at runtime. The mock object's behavior can be defined in the
test case before the code under test runs. EasyMock is based on java.lang.reflect.Proxy, which can create dynamic proxy classes/objects
for given interfaces. But it has an inherent limitation from its use of Proxy: it can create
mock objects only for interfaces.



Mocquer is a similar mock tool, but one that extends the functionality of
EasyMock to support mock object creation for classes as well as interfaces.




Introduction to Mocquer



Mocquer is based on the Dunamis project, which is used to generate dynamic delegation
classes/objects for specific interfaces/classes. For convenience, it follows the class and method naming conventions of EasyMock, but uses a
different approach internally.



MockControl is the main class in the Mocquer project. It is used to control the mock object's life cycle
and behavior definition. There are four kinds of methods in this class.



  • Life Cycle Control Methods

    public void replay();
    public void verify();
    public void reset();


    The mock object has three states in its life cycle: preparing, working, and
    checking. Figure 1 shows the mock object life cycle.



    Mock Object Life Cycle

    Figure 1. Mock object life cycle


    Initially, the mock object is in the preparing state. The mock object's behavior can be defined in this state.
    replay() changes the mock object's state to the working state. All method invocations on the
    mock object in this state will follow the behavior defined in the preparing state. After verify()
    is called, the mock object is in the checking state. MockControl will compare the mock object's predefined behavior
    and actual behavior to see whether they match. The match rule depends on which kind of MockControl is
    used; this will be explained in a moment. The developer can use replay() to reuse
    the predefined behavior if needed. Call reset(), in any state, to clear the history state and
    change to the initial preparing state.
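
    For example, a complete pass through the three states looks like this (a
    minimal sketch, using the Foo class defined later in this article):

    MockControl control = MockControl.createControl(Foo.class);
    Foo foo = (Foo) control.getMock();

    foo.bar(10);                   // preparing: record an expected call...
    control.setReturnValue("ok");  // ...and the value it should return
    control.replay();              // enter the working state

    String result = foo.bar(10);   // working: returns "ok" as defined
    control.verify();              // checking: assert the expectations were met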



  • Factory Methods

    public static MockControl createNiceControl(...);
    public static MockControl createControl(...);
    public static MockControl createStrictControl(...);


    Mocquer provides three kinds of MockControls: Nice, Normal, and Strict.
    The developer can choose an appropriate MockControl in his or her test case, according to what is to be tested (the test point) and how the test
    will be carried out (the test strategy).
    The Nice MockControl is the loosest. It does not care about the order of method invocations on the mock object, nor about unexpected method invocations, which just return a default value (depending on the method's return type).
    The Normal MockControl is stricter than the Nice MockControl: an unexpected method invocation on the mock object will lead to
    an AssertionFailedError. The Strict MockControl is, naturally, the strictest. If the order of method invocations on the
    mock object in the working state differs from that in the preparing state, an AssertionFailedError will be
    thrown.
    The table below shows the differences between these three kinds of MockControl.





                          Nice             Normal                 Strict
      Unexpected Order    Doesn't care     Doesn't care           AssertionFailedError
      Unexpected Method   Default value    AssertionFailedError   AssertionFailedError

    There are two versions for each factory method.





    public static MockControl createXXXControl(Class clazz);
    public static MockControl createXXXControl(Class clazz,
        Class[] argTypes, Object[] args);



    If the class to be mocked is an interface or has a public/protected default constructor, the first version is
    enough. Otherwise, the second version of the factory method is used to specify the signature of, and the arguments for, the desired constructor. For example, assume ClassWithNoDefaultConstructor is a class without a default
    constructor:




    public class ClassWithNoDefaultConstructor {
        public ClassWithNoDefaultConstructor(int i) {
            ...
        }
        ...
    }


    The MockControl can be obtained through:




    MockControl control = MockControl.createControl(
        ClassWithNoDefaultConstructor.class,
        new Class[] {Integer.TYPE},
        new Object[] {new Integer(0)});


  • Mock object getter method

    public Object getMock();


    Each MockControl contains a reference to the generated mock object. The developer can use this method to
    get the mock object and cast it to the real type.




    //get mock control
    MockControl control = MockControl.createControl(Foo.class);
    //Get the mock object from mock control
    Foo foo = (Foo) control.getMock();



  • Behavior definition methods

    public void setReturnValue(... value);
    public void setThrowable(Throwable throwable);
    public void setVoidCallable();
    public void setDefaultReturnValue(... value);
    public void setDefaultThrowable(Throwable throwable);
    public void setDefaultVoidCallable();
    public void setMatcher(ArgumentsMatcher matcher);
    public void setDefaultMatcher(ArgumentsMatcher matcher);


    MockControl allows the developer to define the mock object's behavior for each method invocation on it. In
    the preparing state, the developer first calls one of the mock object's methods to specify which method invocation's
    behavior is to be defined. Then, the developer uses one of the behavior definition methods to specify the behavior. For example,
    take the following Foo class:




    // Foo.java
    public class Foo {
        public void dummy() throws ParseException {
            ...
        }
        public String bar(int i) {
            ...
        }
        public boolean isSame(String[] strs) {
            ...
        }
        public void add(StringBuffer sb, String s) {
            ...
        }
    }


    The behavior of the mock object can be defined as in the following:


    //get mock control
    MockControl control = MockControl.createControl(Foo.class);
    //get mock object
    Foo foo = (Foo) control.getMock();
    //begin behavior definition

    //specify which method invocation's behavior
    //to be defined.
    foo.bar(10);
    //define the behavior -- return "ok" when the
    //argument is 10
    control.setReturnValue("ok");
    ...

    //end behavior definition
    control.replay();
    ...


    Most of the more than 50 methods in MockControl are behavior definition methods. They can
    be grouped into the following categories.




    • setReturnValue()

      These methods are used to specify that the last method invocation should return a value as the parameter. There
      are seven versions of setReturnValue(), each of which takes a primitive type as its parameter, such as
      setReturnValue(int i) or setReturnValue(float f). setReturnValue(Object obj) is used for a method that takes an object instead of a primitive. If the given value does not match the method's return type, an AssertionFailedError will be
      thrown.



      It is also possible to add the number of expected invocations into the behavior definition. This is called the invocation times limitation.




      MockControl control = ...
      Foo foo = (Foo) control.getMock();
      ...
      foo.bar(10);
      //define the behavior -- return "ok" when the
      //argument is 10. And this method is expected
      //to be called just once.
      control.setReturnValue("ok", 1);
      ...


      The code segment above specifies that the method invocation, bar(10), can only occur once. How about
      providing a range?




      ...
      foo.bar(10);
      //define the behavior -- return "ok" when the
      //argument is 10. And this method is expected
      //to be called at least once and at most 3
      //times.
      control.setReturnValue("ok", 1, 3);
      ...


      Now bar(10) is limited to be called at least once and at most three times. More appealingly, a Range
      can be given to specify the limitation.




      ...
      foo.bar(10);
      //define the behavior -- return "ok" when the
      //argument is 10. And this method is expected
      //to be called at least once.
      control.setReturnValue("ok", Range.ONE_OR_MORE);
      ...


      Range.ONE_OR_MORE is a predefined Range instance, which means the method should be called at least once.
      If there is no invocation times limitation specified in setReturnValue(), such as setReturnValue("Hello"),
      it will use Range.ONE_OR_MORE as its default.
      There are two other predefined Range instances: Range.ONE (exactly once) and
      Range.ZERO_OR_MORE (no limit on how many times the method can be called).



      There is also a special set-return-value method: setDefaultReturnValue(). It defines the return value
      of the method invocation regardless of the method parameter values. The invocation times limitation is Range.ONE_OR_MORE.
      This is known as the method parameter values insensitive feature.




      ...
      foo.bar(10);
      //define the behavior -- return "ok" when calling
      //bar(int) regardless of the argument value.
      control.setDefaultReturnValue("ok");
      ...



    • setThrowable

      setThrowable(Throwable throwable) is used to define the method invocation's exception-throwing behavior. If the given throwable does not match the exception declaration of the method, an
      AssertionFailedError will be thrown. The invocation times limitation and method-parameter-values-insensitive
      features can also be applied.




      ...
      try {
          foo.dummy();
      } catch (Exception e) {
          //skip
      }
      //define the behavior -- throw ParseException
      //when dummy() is called. And this method is
      //expected to be called exactly once.
      control.setThrowable(new ParseException("", 0), 1);
      ...



    • setVoidCallable()

      setVoidCallable() is used for a method that has a void return type. The invocation
      times limitation and method parameter values insensitive features can also be applied.




      ...
      try {
          foo.dummy();
      } catch (Exception e) {
          //skip
      }
      //define the behavior -- no return value
      //when dummy() is called. And this method is
      //expected to be called at least once.
      control.setVoidCallable();
      ...


    • Set ArgumentsMatcher

      In the working state, the MockControl searches the predefined behaviors whenever a method invocation happens
      on the mock object. There are three factors in the search criteria: method signature, parameter values, and invocation
      times limitation. The first and third factors are fixed. The second factor can be skipped with the parameter-values-insensitive
      feature described above. More flexibly, it is also possible to customize the parameter value match rule.
      setMatcher() can be used in the preparing state with a customized ArgumentsMatcher.




      public interface ArgumentsMatcher {
          public boolean matches(Object[] expected,
                                 Object[] actual);
      }

      The only method in ArgumentsMatcher, matches(), takes two arguments. One is the expected
      parameter values array (null if the parameter-values-insensitive feature is applied). The other is the actual parameter
      values array. A true return value means that the parameter values match.




      ...
      foo.isSame(null);
      //set the argument match rule -- always match,
      //no matter what parameter is given
      control.setMatcher(MockControl.ALWAYS_MATCHER);
      //define the behavior -- return true when
      //isSame() is called. And this method is
      //expected to be called exactly once.
      control.setReturnValue(true, 1);
      ...


      There are three predefined ArgumentsMatcher instances in MockControl.
      MockControl.ALWAYS_MATCHER always returns true when matching, no matter what parameter
      values are given. MockControl.EQUALS_MATCHER calls equals() on each element
      in the parameter value array. MockControl.ARRAY_MATCHER is almost the same as
      MockControl.EQUALS_MATCHER, except that it calls Arrays.equals() instead of
      equals() when the element in the parameter value array is an array type. Of course, the developer
      can implement his or her own ArgumentsMatcher.



      A side effect of a customized ArgumentsMatcher is that it can also define the method invocation's out
      parameter values.




      ...
      //just to demonstrate the function
      //of out parameter value definition
      foo.add(new String[]{null, null});
      //set the argument match rule -- always
      //match no matter what parameter is given.
      //Also define the value of the out param.
      control.setMatcher(new ArgumentsMatcher() {
          public boolean matches(Object[] expected,
                                 Object[] actual) {
              ((StringBuffer)actual[0])
                  .append(actual[1]);
              return true;
          }
      });
      //define the behavior of add().
      //This method is expected to be called
      //exactly once.
      control.setVoidCallable(1);
      ...


      setDefaultMatcher() sets the MockControl's default ArgumentsMatcher instance. If no
      specific ArgumentsMatcher is given, the default ArgumentsMatcher will be used. This
      method should be called before any method invocation behavior definition. Otherwise, an
      AssertionFailedError will be thrown.


      //get mock control
      MockControl control = ...;
      //get mock object
      Foo foo = (Foo)control.getMock();

      //set default ArgumentsMatcher
      control.setDefaultMatcher(
          MockControl.ALWAYS_MATCHER);
      //begin behavior definition
      foo.bar(10);
      control.setReturnValue("ok");
      ...


      If setDefaultMatcher() is not used, MockControl.ARRAY_MATCHER is the system default ArgumentsMatcher.


An Example


Below is an example that demonstrates Mocquer's usage in unit testing.


Suppose there is a class named FTPConnector.




package org.jingle.mocquer.sample;

import java.io.IOException;
import java.net.SocketException;

import org.apache.commons.net.ftp.FTPClient;

public class FTPConnector {
    //ftp server host name
    String hostName;
    //ftp server port number
    int port;
    //user name
    String user;
    //password
    String pass;

    public FTPConnector(String hostName,
                        int port,
                        String user,
                        String pass) {
        this.hostName = hostName;
        this.port = port;
        this.user = user;
        this.pass = pass;
    }

    /**
     * Connect to the ftp server.
     * The max retry times is 3.
     * @return true if succeed
     */
    public boolean connect() {
        boolean ret = false;
        FTPClient ftp = getFTPClient();
        int times = 1;
        while ((times <= 3) && !ret) {
            try {
                ftp.connect(hostName, port);
                ret = ftp.login(user, pass);
            } catch (SocketException e) {
            } catch (IOException e) {
            } finally {
                times++;
            }
        }
        return ret;
    }

    /**
     * get the FTPClient instance
     * It seems that this method is a nonsense
     * at first glance. Actually, this method
     * is very important for unit test using
     * mock technology.
     * @return FTPClient instance
     */
    protected FTPClient getFTPClient() {
        return new FTPClient();
    }
}


The connect() method tries to connect to an FTP server and log in, retrying up to three times if it fails.
If the operation succeeds, it returns true; otherwise, it returns false. The class uses org.apache.commons.net.ftp.FTPClient
to make a real connection. There is a protected method, getFTPClient(), in this class that looks like nonsense at first glance. Actually, this method is very important for unit testing using mock technology. I will explain
that later.



A JUnit test case, FTPConnectorTest, is provided to test the connect() method logic.
Because we want to isolate the unit test environment from any other factors such as an external FTP server, we use
Mocquer to mock the FTPClient.




package org.jingle.mocquer.sample;

import java.io.IOException;

import org.apache.commons.net.ftp.FTPClient;
import org.jingle.mocquer.MockControl;

import junit.framework.TestCase;

public class FTPConnectorTest extends TestCase {

    /*
     * @see TestCase#setUp()
     */
    protected void setUp() throws Exception {
        super.setUp();
    }

    /*
     * @see TestCase#tearDown()
     */
    protected void tearDown() throws Exception {
        super.tearDown();
    }

    /**
     * test FTPConnector.connect()
     */
    public final void testConnect() {
        //get strict mock control
        MockControl control =
            MockControl.createStrictControl(
                FTPClient.class);
        //get mock object
        //why final? try to remove it
        final FTPClient ftp =
            (FTPClient)control.getMock();

        //Test point 1
        //begin behavior definition
        try {
            //specify the method invocation
            ftp.connect("202.96.69.8", 7010);
            //specify the behavior
            //throw IOException when call
            //connect() with parameters
            //"202.96.69.8" and 7010. This method
            //should be called exactly three times
            control.setThrowable(
                new IOException(), 3);
            //change to working state
            control.replay();
        } catch (Exception e) {
            fail("Unexpected exception: " + e);
        }

        //prepare the instance
        //the overridden method is the bridge to
        //introduce the mock object.
        FTPConnector inst = new FTPConnector(
                "202.96.69.8",
                7010,
                "user",
                "pass") {
            protected FTPClient getFTPClient() {
                //do you understand why declare
                //the ftp variable as final now?
                return ftp;
            }
        };
        //in this case, the connect() should
        //return false
        assertFalse(inst.connect());

        //change to checking state
        control.verify();

        //Test point 2
        try {
            //return to preparing state first
            control.reset();
            //behavior definition
            ftp.connect("202.96.69.8", 7010);
            control.setThrowable(
                new IOException(), 2);
            ftp.connect("202.96.69.8", 7010);
            control.setVoidCallable(1);
            ftp.login("user", "pass");
            control.setReturnValue(true, 1);
            control.replay();
        } catch (Exception e) {
            fail("Unexpected exception: " + e);
        }

        //in this case, the connect() should
        //return true
        assertTrue(inst.connect());

        //verify again
        control.verify();
    }
}

A strict MockControl is created. The mock object variable declaration has a final modifier because the variable
will be used in the inner anonymous class; otherwise, a compilation error will be reported.




There are two test points in the test method. The first test point is when FTPClient.connect() always throws an
exception, meaning FTPConnector.connect() will return false as the result.




try {
    ftp.connect("202.96.69.8", 7010);
    control.setThrowable(new IOException(), 3);
    control.replay();
} catch (Exception e) {
    fail("Unexpected exception: " + e);
}


The MockControl specifies that, when connect() is called on the mock object with the parameters 202.96.69.8 as the host IP and
7010 as the port number, an IOException will be thrown. This method invocation is expected to be called exactly
three times. After the behavior definition, replay() changes the mock object to the working state. The try/catch
block here is to follow the declaration of FTPClient.connect(), which has an IOException defined
in its throws clause.




FTPConnector inst = new FTPConnector("202.96.69.8",
7010,
"user",
"pass") {
protected FTPClient getFTPClient() {
return ftp;
}
};


The code above creates an FTPConnector instance with its getFTPClient() method overridden. It is the bridge that
introduces the created mock object into the target to be tested.




assertFalse(inst.connect());


At this test point, the expected result of connect() is false.




control.verify();


Finally, change the mock object to the checking state.




The second test point is when FTPClient.connect() throws exceptions two times and succeeds on the third time,
and FTPClient.login() also succeeds, meaning FTPConnector.connect() will return true as the result.




This test point follows the same procedure as the previous one, except that the mock object must first be returned
to the preparing state, using reset().




Conclusion



Mock technology isolates the target to be tested from other external factors. Integrating mock technology
into the JUnit framework makes the unit test much simpler and neater. EasyMock is a good mock tool that can
create a mock object for a specified interface. With the help of Dunamis, Mocquer extends the function of EasyMock:
it can create mock objects not only for interfaces, but also for classes. This article gave a brief introduction to
Mocquer's usage in unit testing. For more detailed information, please refer to the references below.



References





Lu Jian
is a senior Java architect/developer with four years of Java development experience.




Posted by 아름프로


Parsing an XML Document with XPath


by Deepak Vohra

01/12/2005






The getter methods in the org.w3c.dom package API are commonly used to parse an XML document. But J2SE 5.0 also provides the javax.xml.xpath package to parse an XML document with the XML Path Language (XPath). The JDOM org.jdom.xpath.XPath class also has methods to select XML document node(s) with an XPath expression, which specifies the location path of an XML document node or a list of nodes.





Parsing an XML document with an XPath expression is more efficient than using the getter methods, because with XPath expressions, an Element node may be selected without iterating over a node list. Node lists retrieved with the getter methods have to be iterated over to retrieve the value of element nodes. For example, the second article node in the journal node in the example XML document in this tutorial (listed in the Overview section below) may be retrieved with the XPath expression:



Element article=(Element)
   (xPath.evaluate("/catalog/journal/article[2]",
                   inputSource,
                   XPathConstants.NODE));



In the code snippet, xPath is a javax.xml.xpath.XPath object, and inputSource is an InputSource object for an XML document. With the org.w3c.dom package getter methods, the second article node in the journal node is retrieved with the code snippet:



Document document;
NodeList nodeList=document.getElementsByTagName("journal");
Element journal=(Element)(nodeList.item(0));
NodeList nodeList2=journal.getElementsByTagName("article");
Element article=(Element)nodeList2.item(1);


Also, with an XPath expression, an Attribute node may be selected directly, whereas with the getter methods, an Element node has to be retrieved before its Attribute node can be evaluated. For example, the value of the level attribute for the article node with the date January-2004 is retrieved with an XPath expression:



String level = 
xPath.evaluate("/catalog/journal/article[@date='January-2004']/@level",
inputSource);


By comparison, the org.w3c.dom package makes you retrieve the org.w3c.dom.Element object for the article, and then get its level attribute with:



String level=article.getAttribute("level");


Overview




In this tutorial, an example XML document is parsed with J2SE 5.0's XPath class and JDOM's XPath class. XML document nodes are selected with XPath expressions. Depending on the XPath expression evaluated, the nodes selected are either org.w3c.dom.Element nodes or org.w3c.dom.Attribute nodes. The example XML document, catalog.xml, is listed below:



<?xml version="1.0" encoding="UTF-8"?> 
<catalog xmlns:journal="http://www.w3.org/2001/XMLSchema-Instance" >
<journal:journal title="XML" publisher="IBM developerWorks">
<article journal:level="Intermediate"
date="February-2003">
<title>Design XML Schemas Using UML</title>
<author>Ayesha Malik</author>
</article>
</journal:journal>
<journal title="Java Technology" publisher="IBM
developerWorks">
<article level="Advanced" date="January-2004">
<title>Design service-oriented architecture
frameworks with J2EE technology</title>
<author>Naveen Balani </author>
</article>
<article level="Advanced" date="October-2003">
<title>Advance DAO Programming</title>
<author>Sean Sullivan </author>
</article>

</journal>
</catalog>






The example XML document has a namespace declaration, xmlns:journal="http://www.w3.org/2001/XMLSchema-Instance", for elements in the journal prefix namespace.



This article is structured into the following sections:




  1. Preliminary Setup


  2. Parsing with the JDK 5.0 XPath Class


  3. Parsing with the JDOM XPath Class





Preliminary Setup




To use J2SE 5.0's XPath support, the javax.xml.xpath package needs to be in the CLASSPATH. Install the J2SE 5.0 SDK, and add the <JDK5.0>\jre\lib\rt.jar file to the CLASSPATH variable if it's not already there. <JDK5.0> is the directory in which JDK 5.0 is installed.



The org.apache.xpath.NodeSet class is required in the CLASSPATH. Install Xalan-Java; extract xalan-j-current-bin.jar to a directory. Add <Xalan>/bin/xalan.jar to the CLASSPATH, where <Xalan> is the directory in which Xalan-Java is installed.





To parse an XML document with the JDOM XPath class, the JDOM API classes need to be in the CLASSPATH. Install JDOM; extract the jdom-b9.zip file to an installation directory. Add <JDOM>/jdom-b9/build/jdom.jar, <JDOM>/jdom-b9/lib/saxpath.jar, <JDOM>/jdom-b9/lib/jaxen-core.jar, <JDOM>/jdom-b9/lib/jaxen-jdom.jar, and <JDOM>/jdom-b9/lib/xerces.jar to the CLASSPATH variable, where <JDOM> is the directory in which JDOM is installed.




Parsing with the JDK 5.0 XPath Class



The javax.xml.xpath package in J2SE 5.0 has classes and interfaces to parse an XML document with XPath. Some of the classes and interfaces in JDK 5.0 are listed in the following table:






















• XPath (interface): Provides access to the XPath evaluation environment. Provides the evaluate methods to evaluate XPath expressions in an XML document.

• XPathExpression (interface): Provides the evaluate methods to evaluate compiled XPath expressions in an XML document.

• XPathFactory (class): Used to create an XPath object.





In this section, the example XML document is evaluated with the javax.xml.xpath.XPath class. First, import the javax.xml.xpath package.



import javax.xml.xpath.*;


The evaluate methods in the XPath and XPathExpression interfaces are used to parse an XML document with XPath expressions. The XPathFactory class is used to create an XPath object. Create an XPathFactory object with the static newInstance method of the XPathFactory class.



XPathFactory  factory=XPathFactory.newInstance();


Create an XPath object from the XPathFactory object with the newXPath method.



XPath xPath=factory.newXPath();


Create and compile an XPath expression with the compile method of the XPath object. As an example, select the title of the article with its date attribute set to January-2004. An attribute in an XPath expression is specified with an @ symbol. For further reference on XPath expressions, see the XPath specification for examples on creating an XPath expression.



XPathExpression  xPathExpression=
xPath.compile("/catalog/journal/article[@date='January-2004']/title");


Create an InputSource for the example XML document. An InputSource is an input class for an XML entity. The evaluate method of the XPathExpression interface evaluates either an InputSource or a node/node list of the types org.w3c.dom.Node, org.w3c.dom.NodeList, or org.w3c.dom.Document.



InputSource inputSource = 
   new InputSource(new FileInputStream(xmlDocument));


xmlDocument is the java.io.File object of the example XML document.



File xmlDocument = 
new File("c:/catalog/catalog.xml");


Evaluate the XPath expression against the InputSource of the example XML document.



String title = 
xPathExpression.evaluate(inputSource);


The result of the XPath expression evaluation is the title: Design service-oriented architecture frameworks with J2EE technology. The XPath object may also be evaluated directly, to obtain the value of an XPath expression in an XML document without first compiling the expression. Create an InputSource.



inputSource = 
   new InputSource(new FileInputStream(xmlDocument));


As an example, evaluate the value of the publisher attribute of the journal element.




String publisher = 
xPath.evaluate("/catalog/journal/@publisher", inputSource);


The result of the XPath object evaluation is the attribute value: IBM developerWorks. The evaluate method in the XPath class may also be used to evaluate a node set. For example, select the node or set of nodes that correspond to the article element nodes in the XML document. Create the XPath expression that represents a node set.



String expression="/catalog/journal/article";


Select the node set of article element nodes in the example XML document with the evaluate method of the XPath object.



NodeSet nodes = 
(NodeSet) xPath.evaluate(expression,
inputSource, XPathConstants.NODESET);


XPathConstants.NODESET specifies the return type of the evaluate method as a NodeSet. The return type may also be set to NODE, STRING, BOOLEAN, or NUMBER. The NodeSet class implements the NodeList interface. To parse the nodes in the node set, cast the NodeSet object to NodeList.



NodeList nodeList=(NodeList)nodes;


Thus, nodes in an XML document get selected and evaluated without iterating over node lists returned by the getter methods of the org.w3c.dom API. The example program XPathEvaluator.java is used to parse an XML document with the JDK 5.0 XPath class.
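
The other return types work the same way. For instance, NUMBER makes evaluate() return a java.lang.Double; below is a sketch using only the calls already shown (and a fresh InputSource, since a stream can be read only once):

//count the article nodes in the example document
Double count = (Double) xPath.evaluate(
    "count(/catalog/journal/article)",
    inputSource, XPathConstants.NUMBER);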





Parsing with the JDOM XPath Class



The JDOM API XPath class supports XPath expressions to select nodes from an XML document. Some of the methods in the JDOM XPath class are illustrated in the following table:























• selectSingleNode: Used to select a single node that matches an XPath expression.

• selectNodes: Used to select a list of nodes that match an XPath expression.

• addNamespace: Used to add a namespace, so that XPath expressions with namespace prefixes can be matched.




In this section, the procedure to select nodes from the example XML document catalog.xml with the JDOM XPath class is discussed. The node or nodes selected by the select methods are modified, and the modified document is output to an XML document. First, import the JDOM org.jdom.xpath package classes.



import org.jdom.xpath.*; 


Create a SAXBuilder.



SAXBuilder saxBuilder = 
new SAXBuilder("org.apache.xerces.parsers.SAXParser");


Parse the XML document catalog.xml with the SAXBuilder.



org.jdom.Document jdomDocument =
saxBuilder.build(xmlDocument);



xmlDocument is the java.io.File representation of the XML document catalog.xml. The static method selectSingleNode(java.lang.Object context, String XPathExpression) selects a single node specified by an XPath expression. If more than one node matches the XPath expression, the first matching node gets selected. Select the attribute node level of an article element in the journal with title set to Java Technology, and with the article attribute date set to January-2004, with an XPath expression.



org.jdom.Attribute levelNode = 
   (org.jdom.Attribute)(XPath.selectSingleNode(
      jdomDocument,
      "/catalog//journal[@title='Java Technology']" +
      "//article[@date='January-2004']/@level"));



The level attribute value Advanced gets selected. Modify the level node.



levelNode.setValue("Intermediate");



The selectSingleNode method may also be used to select an element node in an XML document. As an example, select the title node with an XPath expression.



org.jdom.Element titleNode = 
(org.jdom.Element) XPath.selectSingleNode( jdomDocument,
"/catalog//journal//article[@date='January-2004']/title");


The title node with value Design service-oriented architecture frameworks with J2EE technology gets selected. Modify the title node.



titleNode.setText(
"Service Oriented Architecture Frameworks");



The static method selectNodes(java.lang.Object context, String XPathExpression) selects all of the nodes specified by an XPath expression. Select all of the article nodes for the journal with a title set to Java Technology.



java.util.List nodeList =
XPath.selectNodes(jdomDocument,
"/catalog//journal[@title='Java Technology']//article");


Modify the article nodes. Add an attribute to the article nodes.




Iterator iter=nodeList.iterator();
while(iter.hasNext()) {
    org.jdom.Element element =
        (org.jdom.Element) iter.next();
    element.setAttribute("section", "Java Technology");
}


The JDOM XPath class supports selection of nodes with namespace prefixes. To select a node with a namespace, add a namespace to an XPath:




XPath xpath = 
XPath.newInstance(
"/catalog//journal:journal//article/@journal:level");
xpath.addNamespace("journal",
"http://www.w3.org/2001/XMLSchema-Instance"
);


A namespace with the prefix journal gets added to the XPath object. Select a node with a namespace prefix:




levelNode = (org.jdom.Attribute)
xpath.selectSingleNode(jdomDocument);


The attribute node journal:level gets selected. Modify the journal:level node.




levelNode.setValue("Advanced");


The Java program JDomParser.java is used to select nodes from the catalog.xml XML document. In this section, the procedure to select nodes from an XML document with the JDOM XPath class select methods was explained. The selected nodes are modified, and the modified document is output to an XML document with the XMLOutputter class. catalog-modified.xml is the output XML document.
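
That last step might look like the following (a minimal sketch; XMLOutputter.output() is standard JDOM, while the file handling around it is an assumption):

//serialize the modified JDOM document to catalog-modified.xml
//(requires org.jdom.output.XMLOutputter and
//java.io.FileOutputStream)
XMLOutputter outputter = new XMLOutputter();
outputter.output(jdomDocument,
    new FileOutputStream("catalog-modified.xml"));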




Conclusion



In this tutorial, an XML document was parsed with XPath. XPath is used only to select nodes. The XPath APIs discussed in this tutorial do not provide a way to set values for XML document nodes. To set values for nodes, the setter methods of the org.w3c.dom package are required.




Resources





Deepak Vohra
is a NuBean consultant and a web developer.




Posted by 아름프로


Internals of Java Class Loading


by Binildas Christudas

01/26/2005






Class loading is one of the most powerful mechanisms provided by
the Java language specification. Even though the internals of class loading
fall under the "advanced topics" heading, all Java programmers should know how
the mechanism works and what can be done with it to suit their
needs. This can save time that would otherwise be spent debugging
ClassNotFoundException,
ClassCastException, etc.



This article starts from the basics, such as
the difference between code and data, and how they are related to form an instance
or object. Then it looks into the mechanism of loading code into the JVM with the
help of class loaders, and the main types of class loaders available in Java.
The article then looks into the internals of class loaders, covering the
basic algorithm (or probing) followed by class loaders before they load a class.
The next section of the article uses code examples to demonstrate the necessity
for developers to extend and develop their own class loaders. This is followed
by an explanation of how to write your own class loaders and use them to make a generic
task-execution engine that can load the code supplied by any remote client,
define it in the JVM, and instantiate and then execute it. The article concludes with
references to J2EE-specific components where custom class loading schemas
become the norm.



Class and Data





A class represents the code to be executed, whereas data represents the state
associated with that code. State can change; code generally does not.
When we associate a particular
state to a class, we have an instance of that class. So different instances
of the same class can have different state, but all refer to the same code.
In Java, a class will usually have its code contained in a .class
file, though there are exceptions. Nevertheless, in the Java runtime,
each and every class will have its code also available in the form of a first-class Java object, which is an instance of
java.lang.Class.
Whenever we compile any Java file, the compiler will embed a public, static,
final field named class, of the type
java.lang.Class, in the emitted byte code. Since this field is
public, we can access it using dotted notation, like this:



java.lang.Class klass = MyClass.class;




Once a class is loaded into a JVM, the same class (I repeat, the same class)
will not be loaded again. This leads to the question of what is meant by "the same class."
Similar to the condition that an object has a specific state, an identity,
and that an object is always associated with its code (class), a class loaded
into a JVM also has a specific identity, which we'll look at now.




In Java, a class is identified by its fully qualified class name. The fully
qualified class name consists of the package name and the class name. But
a class is uniquely identified in a JVM using its fully qualified class name
along with the instance of the ClassLoader that loaded the class.
Thus, if a class named Cl in the package Pg is loaded by an instance kl1
of the class loader KlassLoader, the class instance of Cl, i.e., Cl.class, is keyed
in the JVM as (Cl, Pg, kl1).
This means that the two classes keyed as (Cl, Pg, kl1) and (Cl, Pg, kl2) are not
one and the same, and classes loaded by different class loader instances are completely different
and not type-compatible with each other. How many class loader
instances do we have in a JVM? The next section explains this.



Class Loaders




In a JVM, each and every class is loaded by some instance of a
java.lang.ClassLoader. The ClassLoader class is located in
the java.lang package and developers are free to subclass
it to add their own functionality to class loading.




Whenever a new JVM is started by typing java MyMainClass, the "bootstrap class loader" is responsible for loading key Java classes
like java.lang.Object and other runtime code into memory first.
The runtime classes are packaged inside of the JRE\lib\rt.jar file. We cannot
find the details of the bootstrap class loader in the Java documentation, since
this is a native implementation. For the same reason, the behavior of
the bootstrap class loader will also differ across JVMs.




In a related note, we will get null if we try to get the class loader of a core Java runtime class, like this:




log(java.lang.String.class.getClassLoader());



Next comes the Java extension class loader. We can store extension libraries,
those that provide features that go beyond the core Java runtime code,
in the path given by the
java.ext.dirs property. The ExtClassLoader is responsible
for loading all .jar files kept in the java.ext.dirs path.
A developer can add his or her own application .jar files or whatever libraries
he or she might need to add to the classpath to this extension directory
so that they will be loaded by the extension class loader.





The third and most important class loader from the developer perspective
is the AppClassLoader. The application class loader is responsible for
loading all of the classes kept in the path corresponding to the
java.class.path system property.
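
Both locations are exposed as ordinary system properties, so a trivial sketch shows where these two class loaders look:

//path scanned by the extension class loader
System.out.println(System.getProperty("java.ext.dirs"));
//path scanned by the application class loader
System.out.println(System.getProperty("java.class.path"));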




"Understanding Extension Class Loading" in Sun's Java tutorial explains more on the above three
class loader paths. Listed below are a few other class loaders in the JDK:




  • java.net.URLClassLoader

  • java.security.SecureClassLoader

  • java.rmi.server.RMIClassLoader

  • sun.applet.AppletClassLoader




java.lang.Thread contains the method public ClassLoader getContextClassLoader(), which returns the context class loader for a particular thread. The context
class loader is provided by the creator of the thread for use by code running in
this thread when loading classes and resources. If it is not set, the default is the
class loader context of the parent thread. The context class loader of the primordial
thread is typically set to the class loader used to load the application.
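
In code, that looks like the following (a minimal sketch; someWorkerThread is a hypothetical thread created by the application):

//read the context class loader of the current thread
ClassLoader contextLoader =
    Thread.currentThread().getContextClassLoader();
//the creator of a thread may set it explicitly,
//typically before starting the thread
someWorkerThread.setContextClassLoader(contextLoader);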



How Class Loaders Work




All class loaders except the bootstrap class loader have a parent class loader.
Moreover, all class loaders are of the type java.lang.ClassLoader.
The above two statements are different, and very important for the correct
working of any class loaders written by developers. The most important
aspect is to correctly set the parent class loader. The parent class loader
for any class loader is the class loader instance that loaded that class loader.
(Remember, a class loader is itself a class!)
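
A short sketch makes the chain visible; MyMainClass stands in for any application class, and only standard java.lang.ClassLoader calls are used:

//walk up the delegation chain; the bootstrap class
//loader at the top is represented as null
ClassLoader loader = MyMainClass.class.getClassLoader();
while (loader != null) {
    System.out.println(loader);
    loader = loader.getParent();
}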



A class is requested out of a class loader using the
loadClass() method. The internal working of this method can be
seen from the source code for java.lang.ClassLoader, given below:




protected synchronized Class<?> loadClass
        (String name, boolean resolve)
        throws ClassNotFoundException {

    // First check if the class is already loaded
    Class c = findLoadedClass(name);
    if (c == null) {
        try {
            if (parent != null) {
                c = parent.loadClass(name, false);
            } else {
                c = findBootstrapClass0(name);
            }
        } catch (ClassNotFoundException e) {
            // If still not found, then invoke
            // findClass to find the class.
            c = findClass(name);
        }
    }
    if (resolve) {
        resolveClass(c);
    }
    return c;
}



There are two ways to set the parent class loader in the constructor of our own class loader:




public class MyClassLoader extends ClassLoader{

    public MyClassLoader(){
        super(MyClassLoader.class.getClassLoader());
    }
}



or




public class MyClassLoader extends ClassLoader{

    public MyClassLoader(){
        super(getClass().getClassLoader());
    }
}



The first method is preferred because calling the method getClass()
from within the constructor should be discouraged, since the object
initialization will be complete only at the exit of the constructor code.
Thus, if the parent class loader is correctly set, whenever a class is
requested out of a ClassLoader instance, if it cannot find the class, it
should ask the parent first. If the parent cannot find it (which again
means that its parent also cannot find the class, and so on), and if the
findBootstrapClass0() method also fails, the
findClass() method is invoked. The default implementation
of findClass() will throw ClassNotFoundException
and developers are expected to implement this method when they subclass
java.lang.ClassLoader to make custom class loaders. The
default implementation of findClass() is shown below.




protected Class<?> findClass(String name)
        throws ClassNotFoundException {
    throw new ClassNotFoundException(name);
}



Inside of the findClass() method, the class loader needs to fetch
the byte codes from some arbitrary source. The source can be the file system, a network
URL, a database, another application that can spit out byte codes on the fly, or
any similar source that is capable of generating byte code compliant with the
Java byte code specification. You could even use BCEL
(Byte Code Engineering Library), which provides convenient methods to create classes
from scratch at runtime. BCEL is being used successfully in several projects
such as compilers, optimizers, obfuscators, code generators, and analysis tools.
Once the byte code is retrieved, the method should
call the defineClass() method, and the runtime is very particular
about which ClassLoader instance calls this method. Thus, if two ClassLoader
instances define byte codes from the same or different sources, the defined classes
are different.




The
Java language specification
gives a detailed explanation on the process of
loading,
linking, and the
initialization
of classes and interfaces in the Java Execution Engine.




Figure 1 shows an application with a main class called MyMainClass. As explained
earlier, MyMainClass.class will be loaded by the AppClassLoader. MyMainClass creates
instances of two class loaders, CustomClassLoader1 and CustomClassLoader2, which
are capable of finding the byte codes of a fourth class called Target from some
source (say, from a network path). This means the class definition of the Target
class is not in the application class path or extension class path. In such
a scenario, if MyMainClass asks the custom class loaders to load the Target class, Target
will be loaded and Target.class will be defined independently by both
CustomClassLoader1 and CustomClassLoader2. This has serious implications in Java.
If some static initialization code is put in the Target class, and if we want
this code to be executed one and only once in a JVM, in our current setup the code
will be executed twice in the JVM: once each when the class is loaded separately
by both CustomClassLoaders. If the Target class is instantiated in both the
CustomClassLoaders to have the instances target1 and target2 as shown
in Figure 1, then target1 and target2 are not type-compatible. In other words, the JVM
cannot execute the code:




Target target3 = (Target) target2;



The above code will throw a ClassCastException. This is because the JVM sees these
two as separate, distinct class types, since they are defined by different ClassLoader
instances. The above explanation holds true
even if MyMainClass doesn't use two separate class loader classes like CustomClassLoader1
and CustomClassLoader2, and instead uses two separate instances of a single CustomClassLoader
class. This is demonstrated later in the article with code examples.






Figure 1. Multiple ClassLoaders loading the same Target class in the same JVM




A more
detailed explanation on the process of class loading, defining, and linking is
in Andreas Schaefer's article
"Inside Class Loaders."





Why Do We Need our Own Class Loaders?




One of the reasons for a developer to write his or her own class loader is to control
the JVM's class loading behavior. A class in Java is identified using
its package name and class name. For classes that implement
java.io.Serializable, the serialVersionUID plays a major role
in versioning the class. This stream-unique identifier is a 64-bit hash of the
class name, interface class names, methods, and fields. Other than these, there
are no other straightforward mechanisms for versioning a class. Technically
speaking, if the above aspects match, the classes are of "same version."
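
(As an aside, the identifier can also be pinned explicitly, which is the standard Java idiom; the class name here is hypothetical:)

public class Task implements java.io.Serializable {
    //explicitly declared stream-unique identifier; if
    //omitted, a 64-bit hash of the class structure is
    //computed at runtime
    private static final long serialVersionUID = 1L;
}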




But let us think of a scenario where we need to develop a generic Execution Engine, capable of executing any tasks implementing a particular interface. When the
tasks are submitted to the engine, first the engine needs to load the code for
the task. Suppose different clients submit different tasks (i.e., different code) to
the engine, and by chance, all of these tasks have the same class name and
package name. The question is whether the engine will load the different client
versions of the task differently for different client invocation contexts so that the clients will get the output they expect.
The phenomenon is demonstrated in the sample code download, located in the References section below. Two directories, samepath
and differentversions, contain separate examples to demonstrate the concept.




Figure 2 shows how the examples are arranged in three separate subfolders, called samepath,
differentversions, and differentversionspush:





Figure 2. Example folder structure arrangement




In samepath, we have version.Version classes kept in two subdirectories,
v1 and v2. Both classes have the same name and same package. The only difference
between the two classes is in the following lines:




public void fx(){
    log("this = " + this + "; Version.fx(1).");
}



Inside of v1, we have Version.fx(1) in the log statement, whereas
in v2, we have Version.fx(2). Put both of these slightly different
versions of the classes in the same classpath, and run the Test class:




set CLASSPATH=.;%CURRENT_ROOT%\v1;%CURRENT_ROOT%\v2
%JAVA_HOME%\bin\java Test



This will give the console output shown in Figure 3. We can see that code
corresponding to Version.fx(1) is loaded, since the class loader
found that version of the code first in the classpath.





Figure 3. samepath test with version 1 first in the classpath




Repeat the run, with a slight change in the order of path elements in class path.




set CLASSPATH=.;%CURRENT_ROOT%\v2;%CURRENT_ROOT%\v1
%JAVA_HOME%\bin\java Test



The console output is now changed to that shown in Figure 4. Here, the code
corresponding to Version.fx(2) is loaded, since the class loader
found that version of the code first in the classpath.





Figure 4. samepath test with version 2 first in the classpath




From the above example it is obvious that the
class loader will try to load the class using the path element that is found
first. Also, if we delete the version.Version classes from v1
and v2, make a .jar (myextension.jar) out of version.Version, put it in the path corresponding to java.ext.dirs, and repeat the test, we see
that version.Version is no longer loaded by AppClassLoader
but by the extension class loader, as shown in Figure 5.





Figure 5. AppClassLoader and ExtClassLoader




Going forward with the examples, the folder differentversions contains an RMI execution
engine. Clients can supply any tasks that implement common.TaskIntf
to the execution engine. The subfolders client1 and client2 contain slightly
different versions of the class client.TaskImpl. The difference
between the two classes is in the following lines:




static{
    log("client.TaskImpl.class.getClassLoader(v1) : "
        + TaskImpl.class.getClassLoader());
}

public void execute(){
    log("this = " + this + "; execute(1)");
}



Instead of the getClassLoader(v1) and execute(1) log statements
in client1's execute(), client2 has getClassLoader(v2) and
execute(2) log statements. Moreover, in the script that starts
the Execution Engine RMI server, we have arbitrarily put the task implementation class
of client2 first in the classpath.




CLASSPATH=%CURRENT_ROOT%\common;%CURRENT_ROOT%\server;
%CURRENT_ROOT%\client2;%CURRENT_ROOT%\client1
%JAVA_HOME%\bin\java server.Server



The screenshots in Figures 6, 7, and 8 show what is happening under the hood.
Here, in the client VMs, separate client.TaskImpl classes are
loaded, instantiated, and sent to the Execution Engine Server VM for execution.
From the server console, it is apparent that client.TaskImpl code
is loaded only once in the server VM. This single "version" of the code is used
to regenerate many client.TaskImpl instances in the server VM,
and execute the task.





Figure 6. Execution Engine Server console




Figure 6 shows the Execution Engine Server console, which is loading and
executing code on behalf of two separate client requests, as shown in Figures 7
and 8. The point to note here is that the code is loaded only once (as
is evident from the log statement inside of the static initialization block), but
the method is executed twice, once for each client invocation context.





Figure 7. Execution Engine Client 1 console




In Figure 7, the code for the TaskImpl class containing the log statement
client.TaskImpl.class.getClassLoader(v1) is loaded by the client VM,
and supplied to the Execution Engine Server. The client VM in Figure 8 loads
different code for the TaskImpl class containing the log statement
client.TaskImpl.class.getClassLoader(v2), and supplies it to the
Server VM.





Figure 8. Execution Engine Client 2 console




Here, in the client VMs, separate client.TaskImpl classes are
loaded, instantiated, and sent to the Execution Engine Server VM for execution.
A second look at the server console in Figure 6 reveals that the client.TaskImpl code
is loaded only once in the server VM. This single "version" of the code is used
to regenerate the client.TaskImpl instances in the server VM,
and execute the task. Client 1 should be unhappy since instead of his "version"
of the client.TaskImpl(v1), it is some other code that is executed
in the server against Client 1's invocation! How do we tackle such scenarios? The
answer is to implement custom class loaders.




Custom Class Loaders




The solution for fine-grained control of class loading is to implement custom class loaders.
Any custom class loader should have java.lang.ClassLoader as its
direct or distant super class. Moreover, in the constructor, we need to set the
parent class loader, too. Then, we have to override the findClass()
method. The differentversionspush folder contains a custom class loader called
FileSystemClassLoader. Its structure is shown in Figure 9:





Figure 9. Custom class loader relationship




Below are the main methods implemented in common.FileSystemClassLoader:




public byte[] findClassBytes(String className){

    try{
        String pathName = currentRoot +
            File.separatorChar + className.
            replace('.', File.separatorChar)
            + ".class";
        FileInputStream inFile = new
            FileInputStream(pathName);
        byte[] classBytes = new
            byte[inFile.available()];
        inFile.read(classBytes);
        return classBytes;
    }
    catch (java.io.IOException ioEx){
        return null;
    }
}

public Class findClass(String name)throws
        ClassNotFoundException{

    byte[] classBytes = findClassBytes(name);
    if (classBytes==null){
        throw new ClassNotFoundException();
    }
    else{
        return defineClass(name, classBytes,
            0, classBytes.length);
    }
}

public Class findClass(String name, byte[]
        classBytes)throws ClassNotFoundException{

    if (classBytes==null){
        throw new ClassNotFoundException(
            "(classBytes==null)");
    }
    else{
        return defineClass(name, classBytes,
            0, classBytes.length);
    }
}

public void execute(String codeName,
        byte[] code){

    Class klass = null;
    try{
        klass = findClass(codeName, code);
        TaskIntf task = (TaskIntf)
            klass.newInstance();
        task.execute();
    }
    catch(Exception exception){
        exception.printStackTrace();
    }
}



This class is used by the client to convert the client.TaskImpl(v1)
class to a byte[]. This byte[] is then sent to the RMI
Server Execution Engine. In the server, the same class is used for defining
the class back from the code in the form of a byte[]. The client-side
code is shown below:




public class Client{

    public static void main (String[] args){

        try{
            byte[] code = getClassDefinition
                ("client.TaskImpl");
            serverIntf.execute("client.TaskImpl",
                code);
        }
        catch(RemoteException remoteException){
            remoteException.printStackTrace();
        }
    }

    private static byte[] getClassDefinition
            (String codeName){
        String userDir = System.getProperties().
            getProperty("BytePath");
        FileSystemClassLoader fscl1 = null;

        try{
            fscl1 = new FileSystemClassLoader
                (userDir);
        }
        catch(FileNotFoundException
                fileNotFoundException){
            fileNotFoundException.printStackTrace();
        }
        return fscl1.findClassBytes(codeName);
    }
}



Inside of the execution engine, the code received from the client is given
to the custom class loader. The custom class loader will define the class
back from the byte[], instantiate the class, and execute. The
notable point here is that, for each client request, we use separate
instances of the FileSystemClassLoader class to define the client-supplied
client.TaskImpl. Moreover, the client.TaskImpl is
not available in the class path of the server. This means that when we call
findClass() on the FileSystemClassLoader, the
findClass() method calls defineClass() internally, and the
client.TaskImpl class gets defined by that particular instance
of the class loader. So when a new instance of the FileSystemClassLoader
is used, the class is defined from the byte[] all over again. Thus,
for each client invocation, class client.TaskImpl is defined again and
again and we are able to execute "different versions" of the client.TaskImpl
code inside of the same Execution Engine JVM.




public void execute(String codeName, byte[] code)
        throws RemoteException{

    FileSystemClassLoader fileSystemClassLoader = null;

    try{
        fileSystemClassLoader = new FileSystemClassLoader();
        fileSystemClassLoader.execute(codeName, code);
    }
    catch(Exception exception){
        throw new RemoteException(exception.getMessage());
    }
}



Examples are in the differentversionspush folder. The server and client
side consoles are shown in Figures 10, 11, and 12:





Figure 10. Custom class loader execution engine




Figure 10 shows the custom class loader Execution Engine VM console. We can see the
client.TaskImpl code is loaded more than once. In fact, for each client
execution context, the class is newly loaded and instantiated.





Figure 11. Custom class loader engine, Client 1




In Figure 11, the code for the TaskImpl class containing the log statement
client.TaskImpl.class.getClassLoader(v1) is loaded by the client VM,
and pushed to the Execution Engine Server VM. The client VM in Figure 12 loads
different code for the TaskImpl class, containing the log statement
client.TaskImpl.class.getClassLoader(v2), and pushes it to the
Server VM.





Figure 12. Custom class loader engine, Client 2




This code example shows how we can leverage separate instances of class loaders
to have side-by-side execution of "different versions" of code in the same VM.



Class Loaders In J2EE




The class loaders in some J2EE servers tend to drop and reload classes at different
intervals. This will occur in some implementations and may not in others.
Similarly, a web server may decide to remove a previously loaded servlet instance,
perhaps because it is explicitly asked to do so by the server administrator, or
because the servlet has been idle for a long time. When a request is first made
for a JSP (assuming it hasn't been precompiled), the JSP engine will translate the
JSP into its page implementation class, which takes the form of a standard Java servlet.
Once the page's implementation servlet has been created, it will be compiled into a
class file by the JSP engine and will be ready for use. Each time a container receives
a request, it first checks to see if the JSP file has changed since it was last translated.
If it has, it's retranslated so that the response is always generated by the most
up-to-date implementation of the JSP file. Enterprise application
deployment units in the form of .ear, .war, .rar, etc. will also need to be loaded
and reloaded at will or as per configured policies. For all of these scenarios, loading,
unloading, and reloading are possible only if we have control over the application
server JVM's class-loading policy. This is attained by an extended class loader,
which can execute the code defined in its boundary. Brett Peterson has given an explanation
of class loading schemas in a J2EE application server context in his article
"
Understanding J2EE Application Server Class Loading Architectures
" at
TheServerSide.com.



Summary



The article talked about how classes loaded into a Java virtual machine are
uniquely identified and what limitations exist when we try to load different
byte codes for classes with the same names and packages. Since there is no explicit class
versioning mechanism, if we want to load classes at our own will, we have to use
custom class loaders with extended capabilities. Many J2EE application servers have a
"hot deployment" capability, where we can reload an application with a new version
of class definition, without bringing the server VM down. Such application servers
make use of custom class loaders. Even if we don't use an application server, we can
create and use custom class loaders to finely control class loading mechanisms in our Java
applications. Ted Neward's book
Server-Based Java Programming
throws light onto the ins and outs of Java class loading, and it teaches those concepts
of Java that underlie the J2EE APIs and the best ways to use them.



References





Binildas Christudas
is a senior technical architect at Software Engineering Technology
Labs (SET Labs) of Infosys.




Posted by 아름프로


Object-Relational Mapping with SQLMaps


by Sunil Patil

02/02/2005



Introduction



Nowadays a lot of work is going on in the object-relational (OR) mapping field, with Hibernate having seemingly taken the lead over other frameworks. But there is one problem with object-relational mapping tools: most database administrators seem not to be very comfortable with the queries generated by these OR mapping tools. Sadly, these DBAs don't understand how brilliant your framework is in automatically generating queries for you, and how flexible it makes your application. They feel that with the database being your application's primary bottleneck, you should have complete control over SQL queries, so that they will be able to analyze and tune them for performance.



But the problem is that if you don't use an OR mapping tool, then you have to spend a lot of resources writing and maintaining low-level JDBC code; a sketch of that boilerplate follows the list below. Every JDBC application will have repetitive code for:



  1. Connection and transaction management.

  2. Setting Java objects as query parameters.

  3. Converting SQL ResultSets into Java objects.

  4. Creating query strings.
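
For contrast, the hand-written JDBC equivalent of a single SELECT exhibits all four kinds of boilerplate (a sketch with hypothetical variables; dataSource, contactId, and the Contact class stand in for application code):

//1. connection management
Connection conn = null;
PreparedStatement stmt = null;
ResultSet rs = null;
Contact contact = null;
try {
    conn = dataSource.getConnection();
    //4. query string creation
    stmt = conn.prepareStatement(
        "SELECT CONTACTID, FIRSTNAME, LASTNAME " +
        "FROM CONTACT WHERE CONTACTID = ?");
    //2. setting Java objects as query parameters
    stmt.setInt(1, contactId);
    rs = stmt.executeQuery();
    //3. converting the ResultSet into a Java object
    if (rs.next()) {
        contact = new Contact();
        contact.setContactId(rs.getInt("CONTACTID"));
        contact.setFirstName(rs.getString("FIRSTNAME"));
        contact.setLastName(rs.getString("LASTNAME"));
    }
} finally {
    if (rs != null) rs.close();
    if (stmt != null) stmt.close();
    if (conn != null) conn.close();
}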



iBatis' SQLMaps framework helps you to significantly reduce the amount of Java code that you normally need to access a relational database. It takes care of three of the above concerns, in that it allows an easy mapping of a JavaBean object to PreparedStatement parameters and ResultSet values. The philosophy behind SQLMaps is simple: provide a simple framework that covers 80 percent of JDBC's functionality.









This article is a step-by-step tutorial about how to use the SQLMaps framework. We will start by creating a sample Struts application and configuring it to use SQLMaps. Then we will cover how to perform basic database operations like SELECT, INSERT, and UPDATE. Next, we will cover the options SQLMaps provides for connection and transaction management. At the end, we will try out some advanced features of SQLMaps, like caching and paging.



The Basic Idea Behind SQLMaps



To use the SQLMaps framework, you create an XML file that lists all of the SQL queries that you wish to execute through your application. For each SQL query, you specify which Java class the query will exchange parameters and ResultSets with.



Inside of your Java code, when you want to execute a particular query, you create an object to carry the query parameters and necessary conditions, and then pass this object, along with the name of the query to be executed, to SQLMaps. Once the query is executed, SQLMaps will create an instance of the class you have specified to receive query results, and populate it with values from the ResultSet returned by the database.
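
In skeleton form, the round trip looks like this (a sketch; the getContact query and the Contact class are defined in the steps below):

//execute the query named "getContact", passing an Integer
//as the parameter object; SQLMaps maps the resulting row
//to a new Contact instance
Contact contact = (Contact)
    sqlMap.queryForObject("getContact", new Integer(1));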



A Simple Application Using SQLMaps (Hello World)



We will start by creating a sample Struts application to demonstrate what needs to change in your application to use SQLMaps. The code for this sample may be found in the Resources section below. In this sample application, we will create a JSP page that asks the user for a contactId. Once it is submitted, we use it to search for a contact in the CONTACT table, which is then displayed to the user using another JSP. Follow these step-by-step instructions:




  1. Copy ibatis-sqlmap-2.jar and ibatis-common-2.jar to your web-inf/lib directory.



  2. Create a SqlMapConfig.xml file in your Java source folder, like this:


    <sqlMapConfig>
      <settings useStatementNamespaces="false" />
      <transactionManager type="JDBC">
        <dataSource type="SIMPLE">
          <property name="JDBC.Driver"
              value="COM.ibm.db2.jdbc.app.DB2Driver"/>
          <property name="JDBC.ConnectionURL"
              value="jdbc:db2:SAMPLE"/>
          <property name="JDBC.Username"
              value="db2admin"/>
          <property name="JDBC.Password"
              value="admin2db"/>
        </dataSource>
      </transactionManager>
      <sqlMap resource="Contact.xml"/>
    </sqlMapConfig>


    SqlMapConfig.xml is the deployment descriptor for SQLMaps and contains the following elements:



    • <sqlMapConfig> is the root element of the file. The <settings> element is used for defining application-level settings; for instance, the useStatementNamespaces attribute is used to define whether you want to use the fully qualified name of the prepared statement. It can have a few more attributes for controlling caching and lazy initialization; please look into the documentation for further details.


    • The <transactionManager> element is used to define what kind of transaction management you want to use in your application. In our sample application, we want to use the Connection object's commit and rollback methods to manage transactions, so we are using JDBC as the transaction manager. It contains <dataSource> as a child element, which defines the type of Connection management you want to use. In our sample application, we want to use SQLMaps' own implementation of connection pooling, so we are using a datasource of type SIMPLE. SQLMaps requires information like the JDBC driver name, URL, and password in order to create the connection pool, so we are using <property> elements for passing that information. We will cover various available transaction and connection management options in more detail later.


    • The <sqlMap> element is used to declare sqlmap config files. These files, discussed earlier, list the SQL queries that you wish to execute.






  3. Create a JavaBean-type class, Contact.java, that has firstName, lastName, and contactId properties and corresponding getter and setter methods. This class will be used for passing query parameters and reading values from the ResultSet.


    public class Contact implements Serializable{
        private String firstName;
        private String lastName;
        private int contactId;
        //Getter and setter methods for the firstName,
        //lastName, and contactId properties
    }




  4. Create a Contact.xml file like this, where we will list all Contact-table-related SQL queries that we want to execute:

    <sqlMap namespace="Contact"">
    <typeAlias alias="contact"
    type="com.sample.contact.Contact"/">
    <select id="getContact"
    parameterClass="int" resultClass="contact"">
    select CONTACTID as contactId,
    FIRSTNAME as firstName,
    LASTNAME as lastName from
    ADMINISTRATOR.CONTACT where CONTACTID = #id#
    </select>
    </sqlMap>


    The tags used in the file are as follows:

    • <sqlMap> is the root element of the file. Your application will normally have more than one table, and since you will want to separate queries related to different tables into different namespaces, the namespace attribute is used to specify the namespace in which all of the queries in this file should be placed.

    • <typeAlias> is used to declare a short name for the fully qualified name of the Contact class. After this declaration, the short name can be used instead of the fully qualified name.

    • The <select> element should be used for declaring a SELECT query in the SQLMaps framework. You specify the query to be executed as the value of the element. The id attribute specifies the name used to instruct SQLMaps to execute this particular query, parameterClass specifies which class is used for passing query parameters, and resultClass names the class used to map values from the ResultSet.





  5. Inside of the execute() method of our Action class, we build an instance of SqlMapClient, which is used for interacting with SQLMaps. We have to pass the SqlMapConfig.xml file to SqlMapClientBuilder, which is used to read configuration settings.



    DynaActionForm contactForm =
    (DynaActionForm)form;
    Reader configReader =
    Resources.getResourceAsReader("SqlMapConfig.xml");
    SqlMapClient sqlMap =
    SqlMapClientBuilder.buildSqlMapClient(configReader);
    Contact contact = (Contact)
    sqlMap.queryForObject("getContact",
    contactForm.get("contactId"));
    request.setAttribute("contactDetail", contact);
    return mapping.findForward("success");



    SQLMaps' queryForObject method should be used when you want to execute a SELECT query. In Contact.xml, we specified int as the parameterClass, so we pass contactId as an integer, along with the name of the query (i.e., getContact). SQLMaps then returns an object of the Contact class.





















Basic Database Operations


Now we will turn our focus to performing some basic database operations using SQLMaps.


  1. Insert


    We will start with how to execute an INSERT query.



    <insert id="insertContact" parameterClass="contact">
    INSERT INTO ADMINISTRATOR.CONTACT( CONTACTID,FIRSTNAME,LASTNAME)
    VALUES(#contactId#,#firstName#,#lastName#)
    </insert>

    The <insert> element is used to declare an INSERT SQL query. It has a parameterClass attribute to indicate which JavaBean class should be used to pass query parameters. We want to use the value of the contactId property while inserting new records, so we use #contactId# in our SQL query.



    public void contactInsert() throws SQLException {
        try {
            sqlMap.startTransaction();
            Contact contact = new Contact();
            contact.setContactId(3);
            contact.setFirstName("John");
            contact.setLastName("Doe");
            sqlMap.insert("insertContact", contact);
            sqlMap.commitTransaction();
        } finally {
            sqlMap.endTransaction();
        }
    }


    Inside of our Java code, we create a Contact object, populate its values, and then call sqlMap.insert(), passing the name of the query that we want to execute and the Contact object. This method inserts the new contact and returns the primary key of the newly inserted row (when a key-generation strategy such as <selectKey> is configured).



    By default, SQLMaps treats every DML method as a single unit of work, but you can use the startTransaction, commitTransaction, and endTransaction methods for transaction boundary demarcation. Calling startTransaction() begins the transaction and retrieves a connection from the connection pool; that Connection object is used for executing all queries in the transaction. If all of the queries execute successfully, call commitTransaction() to commit your changes. Whether or not the transaction succeeds, you should always call endTransaction() at the end; it returns the connection to the pool and is necessary for proper cleanup.
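
    To make the demarcation concrete, here is a minimal sketch (assuming the sqlMap field and the insertContact and updateContact statements shown in this article) of two statements committed as a single unit of work:

    public void insertAndUpdate(Contact fresh, Contact changed) throws SQLException {
        try {
            // Borrows one connection from the pool for the whole transaction
            sqlMap.startTransaction();
            sqlMap.insert("insertContact", fresh);
            sqlMap.update("updateContact", changed);
            // Both statements succeed or fail together
            sqlMap.commitTransaction();
        } finally {
            // Always returns the connection to the pool
            sqlMap.endTransaction();
        }
    }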




  2. Update


    The <update> element is used to declare an UPDATE query. Its parameterClass attribute declares the name of the JavaBean class used to pass query parameters. Inside of your Java code you can instruct SQLMaps to fire an update query with sqlMap.update("updateContact", contact). This method returns the number of affected rows.



    <update id="updateContact" parameterClass="contact">
    update ADMINISTRATOR.CONTACT SET
    FIRSTNAME=#firstName# ,
    LASTNAME=#lastName#
    where contactid=#contactId#
    </update>
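
    As a sketch, the corresponding Java code (reusing the sqlMap instance and Contact bean from the earlier steps; the values are illustrative only) might look like this:

    public int contactUpdate(int id, String firstName, String lastName) throws SQLException {
        Contact contact = new Contact();
        contact.setContactId(id);
        contact.setFirstName(firstName);
        contact.setLastName(lastName);
        // Returns the number of rows changed by the UPDATE
        return sqlMap.update("updateContact", contact);
    }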



  3. Delete


    The <delete> element is used to declare a DELETE query. Inside of your Java class, you execute the statement like this: sqlMap.delete("deleteContact",new Integer(contactId)). The method returns the number of affected rows.




    <delete id="deleteContact" parameterClass="int">
    DELETE FROM ADMINISTRATOR.CONTACT WHERE CONTACTID=#contactId#
    </delete>
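
    And the matching Java call, again as a sketch:

    public int contactDelete(int contactId) throws SQLException {
        // Returns the number of rows removed
        return sqlMap.delete("deleteContact", new Integer(contactId));
    }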



  4. Procedure


    Stored procedures are supported via the <procedure> element. Most stored procedures take parameters, which can be of type IN, INOUT, or OUT, so you create a <parameterMap> element listing the parameters that you want to pass to the stored procedure. The parameter map object is modified by the call only if the parameter mode is OUT or INOUT.



    <parameterMap id="swapParameters" class="map" >
    <parameter property="contactId" jdbcType="INTEGER"
    javaType="java.lang.Integer" mode="IN"/>
    <parameter property="firstName" jdbcType="VARCHAR"
    javaType="java.lang.String" mode="IN"/>
    <parameter property="lastName" jdbcType="VARCHAR"
    javaType="java.lang.String" mode="IN"/>
    </parameterMap>

    <procedure id="swapContactName" parameterMap="swapParameters" >
    {call swap_contact_name (?, ?,?)}
    </procedure>


    Inside of your Java code, first create a HashMap of the parameters that you want to pass to the procedure, and then pass it to sqlMap along with the name of the query that you want to execute.



    HashMap paramMap = new HashMap();
    paramMap.put("contactId", new Integer(1));
    paramMap.put("firstName", "Sunil");
    paramMap.put("lastName", "Patil");
    sqlMap.queryForObject("swapContactName", paramMap);
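
    If the procedure reported a result through an OUT parameter, the map would be updated in place after the call. A sketch, assuming a hypothetical get_contact_count procedure that returns a row count through a single OUT parameter:

    <parameterMap id="countParameters" class="map">
    <parameter property="total" jdbcType="INTEGER"
    javaType="java.lang.Integer" mode="OUT"/>
    </parameterMap>

    <procedure id="getContactCount" parameterMap="countParameters">
    {call get_contact_count (?)}
    </procedure>

    HashMap paramMap = new HashMap();
    sqlMap.queryForObject("getContactCount", paramMap);
    // Populated by the framework from the OUT parameter
    Integer total = (Integer) paramMap.get("total");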






Connection and Transaction Management


The SQLMaps framework takes care of connection management for you and ships with three different implementations. You specify which implementation you want to use via the type attribute of the <dataSource> element.



  • SIMPLE: Use SQLMaps' own connection pool implementation. While using this implementation, you have to pass connection information (such as a JDBC driver name, username, and password) to SQLMaps.

  • DBCP: Use Apache's DBCP connection pooling algorithm.

  • JNDI: Use a container-supplied datasource. If you want to use this method, first configure the JDBC datasource in the container (in some container-specific way), and then specify the JNDI name of the datasource like this:

    <transactionManager type="JDBC" >
    <dataSource type="JNDI">
    <property name="DataSource"
    value="java:comp/env/jdbc/testDB"/>
    </dataSource>
    </transactionManager>

    The value of the DataSource property should point to the JNDI name of the datasource you want to use.


SQLMaps uses DataSourceFactory implementations for connection management, so you can create your own class implementing this interface and instruct SQLMaps to use it, if you like.
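
A rough sketch of such a custom factory follows. The two-method shape of the interface (initialize plus getDataSource) is an assumption based on the iBATIS 2.x codebase, and buildMyPool is a hypothetical helper; check the interface in the version you are using:

import java.util.Map;
import javax.sql.DataSource;
import com.ibatis.sqlmap.engine.datasource.DataSourceFactory;

public class MyDataSourceFactory implements DataSourceFactory {
    private DataSource dataSource;

    // Receives the <property> name/value pairs from SqlMapConfig.xml
    public void initialize(Map properties) {
        this.dataSource = buildMyPool(properties);
    }

    public DataSource getDataSource() {
        return dataSource;
    }

    private DataSource buildMyPool(Map properties) {
        // Construct and return your own pooling DataSource here
        throw new UnsupportedOperationException("pool construction omitted");
    }
}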


For transaction management, the type attribute of the <transactionManager> element in SqlMapConfig.xml indicates which class should be used for transaction management:



  • JDBC: In this case, transactions are controlled by calling the commit() and rollback() methods on the underlying Connection object. This option suits applications that interact with a single database and do not need container-managed (JTA) transactions.

  • JTA: In this case, a global JTA transaction is used. SQLMaps activities can be included as a part of a wider-scope transaction that possibly involves other databases and transaction resources.

  • External: In this case, you have to manage the transaction on your own. A transaction will not be committed or rolled back as part of the framework lifecycle. This setting is useful for non-transactional (read-only) databases.



Advanced Features



Now we can spend some time talking about the advanced features of the SQLMaps framework. The scope of this article does not allow me to cover all of them, so I will talk about a few that I think are commonly useful; you can look into the SQLMaps documentation (PDF) to find out what other features are supported.



Caching


The <cacheModel> element is used to describe a cache for use with a query-mapped statement.


<cacheModel id="contactCache" type="LRU">
<flushOnExecute statement="insertContact"/>
<flushOnExecute statement="updateContact"/>
<flushOnExecute statement="deleteContact"/>
<property name="size" value="1000"/>
</cacheModel>

<select id="getCachedContact" parameterClass="int"
resultClass="contact" cacheModel="contactCache">
select FIRSTNAME as firstName,LASTNAME as lastName
from CONTACT where CONTACTID = #contactId#
</select>

Each query can have a different cache model, or more than one query can share the same cache. SQLMaps supports a pluggable framework for different types of caches; the implementation to use is specified in the type attribute of the <cacheModel> element.



  • LRU: Removes the least recently used element from the cache when the cache is full.

  • FIFO: Removes the oldest object from the cache once the cache is full.

  • MEMORY: Uses Java reference types such as SOFT, WEAK, and STRONG to manage cache behavior. It allows the garbage collector to determine what stays in memory and what gets deleted. This implementation should be used in applications where memory is scarce.


  • OSCACHE: A plugin for the OSCache 2.0 caching engine. You need an oscache.properties file in the root of your classpath to configure OSCache. This implementation can be used in distributed applications.


The cacheModel attribute of the <select> element defines which caching model should be used for caching its results. You can disable caching globally for SqlMapClient by setting the value of the cacheModelsEnabled attribute of <settings> to false.



How to Enable Logging



SQLMaps provides logging information through the use of the Jakarta Commons logging framework. Follow these steps to enable logging:



  1. Add log4j.jar to your application classpath. For a web application, you will have to copy it to WEB-INF/lib.

  2. Create a log4j.properties file like the following in your classpath root:

    log4j.rootLogger=ERROR, stdout
    # SqlMap logging configuration...
    log4j.logger.com.ibatis=DEBUG
    log4j.logger.com.ibatis.common.jdbc.SimpleDataSource=DEBUG
    log4j.logger.com.ibatis.common.jdbc.ScriptRunner=DEBUG
    log4j.logger.com.ibatis.sqlmap.engine.impl.SqlMapClientDelegate=DEBUG
    log4j.logger.java.sql.Connection=DEBUG
    log4j.logger.java.sql.Statement=DEBUG
    log4j.logger.java.sql.PreparedStatement=DEBUG
    log4j.logger.java.sql.ResultSet=DEBUG
    # Console output...
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%5p [%t] - %m%n



Paging


Assume that our CONTACT table has 1000 records and we want to display them to the user in a spreadsheet-like grid, but only 50 records at a time. In this situation, we don't want to query the CONTACT table for a ResultSet containing all 1000 contacts; we want to fetch 50 records at a time. SQLMaps provides the PaginatedList interface for handling this type of situation. It lets you deal with one page of data at a time, through which the user can navigate forward and backward.




PaginatedList list = sqlMap.queryForPaginatedList("getContacts", null, 2);
while (true) {
    Iterator listIterator = list.iterator();
    while (listIterator.hasNext()) {
        System.out.println(((Contact) listIterator.next()).getContactId());
    }
    if (list.isNextPageAvailable())
        list.nextPage();
    else
        break;
}
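
If you only need one fixed window of rows rather than interactive navigation, the queryForList overload that takes skip and max arguments (present in iBATIS 2.x; verify against the version you are using) is a lighter-weight alternative:

// Fetch rows 51-100 of the result set: skip 50 rows, return at most 50
List contacts = sqlMap.queryForList("getContacts", null, 50, 50);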


Conclusion


SQLMaps is a very good option if your application has a small number of fixed queries. It is very easy to use and allows developers to take advantage of their existing knowledge of SQL. It also helps you achieve a separation of roles: a developer can list the queries the application needs and start working on the Java code, while handing the SQLMaps XML file to a DBA who can analyze and tune the SQL.


Advantages



  1. Does not depend on which database dialects an OR mapping framework supports; you write the SQL yourself.

  2. Very easy to use; supports many advanced features.

  3. Doesn't require learning a new query language like EJBQL. Allows you to take advantage of your existing knowledge of SQL.



Disadvantages



  1. Applications will not be portable across databases if your SQL uses database-specific features.


But if your application is going to work with multiple databases, or if it has a large number of queries, then you may want to evaluate several OR mapping frameworks before making a final decision.




Sunil Patil
has worked on J2EE technologies for four years. He is currently working with IBM Software Labs.







Posted by 아름프로

About PostgreSQL

2005. 2. 11. 11:06

About


PostgreSQL is an object-relational database management system (ORDBMS) based on POSTGRES, Version 4.2, developed at the University of California at Berkeley Computer Science Department. POSTGRES pioneered many concepts that only became available in some commercial database systems much later.


PostgreSQL is an open-source descendant of this original Berkeley code. It supports SQL92 and SQL99 and offers many modern features:



  • complex queries

  • foreign keys

  • triggers

  • views

  • transactional integrity

  • multiversion concurrency control


Additionally, PostgreSQL can be extended by the user in many ways, for example by adding new



  • data types

  • functions

  • operators

  • aggregate functions

  • index methods

  • procedural languages


And because of the liberal license, PostgreSQL can be used, modified, and distributed by everyone free of charge for any purpose, be it private, commercial, or academic.

Posted by 아름프로

J2EE 1.4 Glossary

2005. 2. 11. 08:53
Glossary

J2EE v1.4 Glossary

















 















abstract schema

The part of an entity bean's deployment descriptor that defines the bean's persistent fields and relationships.

abstract schema name

A logical name that is referenced in EJB QL queries.

access control

The methods by which interactions with resources are limited to collections of users or programs for the purpose of enforcing integrity, confidentiality, or availability constraints.

ACID

The acronym for the four properties guaranteed by transactions: atomicity, consistency, isolation, and durability.

activation

The process of transferring an enterprise bean from secondary storage to memory. (See passivation.)

anonymous access

Accessing a resource without authentication.

applet

A J2EE component that typically executes in a Web browser but can execute in a variety of other applications or devices that support the applet programming model.

applet container

A container that includes support for the applet programming model.

application assembler

A person who combines J2EE components and modules into deployable application units.

application client

A first-tier J2EE client component that executes in its own Java virtual machine. Application clients have access to some J2EE platform APIs.

application client container

A container that supports application client components.

application client module

A software unit that consists of one or more classes and an application client deployment descriptor.

application component provider

A vendor that provides the Java classes that implement components' methods, JSP page definitions, and any required deployment descriptors.

application configuration resource file

An XML file used to configure resources for a JavaServer Faces application, to define navigation rules for the application, and to register converters, validators, listeners, renderers, and components with the application.

archiving

The process of saving the state of an object and restoring it.

asant

A Java-based build tool that can be extended using Java classes. The configuration files are XML-based, calling out a target tree where various tasks get executed.

attribute

A qualifier on an XML tag that provides additional information.

authentication

The process that verifies the identity of a user, device, or other entity in a computer system, usually as a prerequisite to allowing access to resources in a system. The Java servlet specification requires three types of authentication (basic, form-based, and mutual) and supports digest authentication.

authorization

The process by which access to a method or resource is determined. Authorization depends on the determination of whether the principal associated with a request through authentication is in a given security role. A security role is a logical grouping of users defined by the person who assembles the application. A deployer maps security roles to security identities. Security identities may be principals or groups in the operational environment.

authorization constraint

An authorization rule that determines who is permitted to access a Web resource collection.






B2B

Business-to-business.

backing bean

A JavaBeans component that corresponds to a JSP page that includes JavaServer Faces components. The backing bean defines properties for the components on the page and methods that perform processing for the component. This processing includes event handling, validation, and processing associated with navigation.

basic authentication

An authentication mechanism in which a Web server authenticates an entity via a user name and password obtained using the Web application's built-in authentication mechanism.

bean-managed persistence

The mechanism whereby data transfer between an entity bean's variables and a resource manager is managed by the entity bean.

bean-managed transaction

A transaction whose boundaries are defined by an enterprise bean.

binary entity

See unparsed entity.

binding (XML)

Generating the code needed to process a well-defined portion of XML data.

binding (JavaServer Faces technology)

Wiring UI components to back-end data sources such as backing bean properties.

build file

The XML file that contains one or more asant targets. A target is a set of tasks you want to be executed. When starting asant, you can select which targets you want to have executed. When no target is given, the project's default target is executed.

business logic

The code that implements the functionality of an application. In the Enterprise JavaBeans architecture, this logic is implemented by the methods of an enterprise bean.

business method

A method of an enterprise bean that implements the business logic or rules of an application.






callback methods

Component methods called by the container to notify the component of important events in its life cycle.

caller

Same as caller principal.

caller principal

The principal that identifies the invoker of the enterprise bean method.

cascade delete

A deletion that triggers another deletion. A cascade delete can be specified for an entity bean that has container-managed persistence.

CDATA

A predefined XML tag for character data that means "don't interpret these characters," as opposed to parsed character data (PCDATA), in which the normal rules of XML syntax apply. CDATA sections are typically used to show examples of XML syntax.

certificate authority

A trusted organization that issues public key certificates and provides identification to the bearer.

client-certificate authentication

An authentication mechanism that uses HTTP over SSL, in which the server and, optionally, the client authenticate each other with a public key certificate that conforms to a standard that is defined by X.509 Public Key Infrastructure.

comment

In an XML document, text that is ignored unless the parser is specifically told to recognize it.

commit

The point in a transaction when all updates to any resources involved in the transaction are made permanent.

component

See J2EE component.

component (JavaServer Faces technology)

See JavaServer Faces UI component.

component contract

The contract between a J2EE component and its container. The contract includes life-cycle management of the component, a context interface that the instance uses to obtain various information and services from its container, and a list of services that every container must provide for its components.

component-managed sign-on

A mechanism whereby security information needed for signing on to a resource is provided by an application component.

connection

See resource manager connection.

connection factory

See resource manager connection factory.

connector

A standard extension mechanism for containers that provides connectivity to enterprise information systems. A connector is specific to an enterprise information system and consists of a resource adapter and application development tools for enterprise information system connectivity. The resource adapter is plugged in to a container through its support for system-level contracts defined in the Connector architecture.

Connector architecture

An architecture for integration of J2EE products with enterprise information systems. There are two parts to this architecture: a resource adapter provided by an enterprise information system vendor and the J2EE product that allows this resource adapter to plug in. This architecture defines a set of contracts that a resource adapter must support to plug in to a J2EE product (for example, transactions, security, and resource management).

container

An entity that provides life-cycle management, security, deployment, and runtime services to J2EE components. Each type of container (EJB, Web, JSP, servlet, applet, and application client) also provides component-specific services.

container-managed persistence

The mechanism whereby data transfer between an entity bean's variables and a resource manager is managed by the entity bean's container.

container-managed sign-on

The mechanism whereby security information needed for signing on to a resource is supplied by the container.

container-managed transaction

A transaction whose boundaries are defined by an EJB container. An entity bean must use container-managed transactions.

content

In an XML document, the part that occurs after the prolog, including the root element and everything it contains.

context attribute

An object bound into the context associated with a servlet.

context root

A name that gets mapped to the document root of a Web application.

conversational state

The field values of a session bean plus the transitive closure of the objects reachable from the bean's fields. The transitive closure of a bean is defined in terms of the serialization protocol for the Java programming language, that is, the fields that would be stored by serializing the bean instance.

CORBA

Common Object Request Broker Architecture. A language-independent distributed object model specified by the OMG.

create method

A method defined in the home interface and invoked by a client to create an enterprise bean.

credentials

The information describing the security attributes of a principal.

CSS

Cascading style sheet. A stylesheet used with HTML and XML documents to add a style to all elements marked with a particular tag, for the direction of browsers or other presentation mechanisms.

CTS

Compatibility test suite. A suite of compatibility tests for verifying that a J2EE product complies with the J2EE platform specification.






data

The contents of an element in an XML stream, generally used when the element does not contain any subelements. When it does, the term content is generally used. When the only text in an XML structure is contained in simple elements and when elements that have subelements have little or no data mixed in, then that structure is often thought of as XML data, as opposed to an XML document.

DDP

Document-driven programming. The use of XML to define applications.

declaration

The very first thing in an XML document, which declares it as XML. The minimal declaration is <?xml version="1.0"?>. The declaration is part of the document prolog.

declarative security

Mechanisms used in an application that are expressed in a declarative syntax in a deployment descriptor.

delegation

An act whereby one principal authorizes another principal to use its identity or privileges with some restrictions.

deployer

A person who installs J2EE modules and applications into an operational environment.

deployment

The process whereby software is installed into an operational environment.

deployment descriptor

An XML file provided with each module and J2EE application that describes how they should be deployed. The deployment descriptor directs a deployment tool to deploy a module or application with specific container options and describes specific configuration requirements that a deployer must resolve.

destination

A JMS administered object that encapsulates the identity of a JMS queue or topic. See point-to-point messaging system, publish/subscribe messaging system.

digest authentication

An authentication mechanism in which a Web application authenticates itself to a Web server by sending the server a message digest along with its HTTP request message. The digest is computed by applying a one-way hash algorithm to a concatenation of the HTTP request message and the client's password. The digest is typically much smaller than the HTTP request and doesn't contain the password.

distributed application

An application made up of distinct components running in separate runtime environments, usually on different platforms connected via a network. Typical distributed applications are two-tier (client-server), three-tier (client-middleware-server), and multitier (client-multiple middleware-multiple servers).

document

In general, an XML structure in which one or more elements contains text intermixed with subelements. See also data.

Document Object Model

An API for accessing and manipulating XML documents as tree structures. DOM provides platform-neutral, language-neutral interfaces that enable programs and scripts to dynamically access and modify content and structure in XML documents.

document root

The top-level directory of a WAR. The document root is where JSP pages, client-side classes and archives, and static Web resources are stored.

DOM

See Document Object Model.

DTD

Document type definition. An optional part of the XML document prolog, as specified by the XML standard. The DTD specifies constraints on the valid tags and tag sequences that can be in the document. The DTD has a number of shortcomings, however, and this has led to various schema proposals. For example, the DTD entry <!ELEMENT username (#PCDATA)> says that the XML element called username contains parsed character data, that is, text alone, with no other structural elements under it. The DTD includes both the local subset, defined in the current file, and the external subset, which consists of the definitions contained in external DTD files that are referenced in the local subset using a parameter entity.

durable subscription

In a JMS publish/subscribe messaging system, a subscription that continues to exist whether or not there is a current active subscriber object. If there is no active subscriber, the JMS provider retains the subscription's messages until they are received by the subscription or until they expire.






EAR file

Enterprise Archive file. A JAR archive that contains a J2EE application.

ebXML

Electronic Business XML. A group of specifications designed to enable enterprises to conduct business through the exchange of XML-based messages. It is sponsored by OASIS and the United Nations Centre for the Facilitation of Procedures and Practices in Administration, Commerce and Transport (U.N./CEFACT).

EJB

See Enterprise JavaBeans.

EJB container

A container that implements the EJB component contract of the J2EE architecture. This contract specifies a runtime environment for enterprise beans that includes security, concurrency, life-cycle management, transactions, deployment, naming, and other services. An EJB container is provided by an EJB or J2EE server.

EJB container provider

A vendor that supplies an EJB container.

EJB context

An object that allows an enterprise bean to invoke services provided by the container and to obtain the information about the caller of a client-invoked method.

EJB home object

An object that provides the life-cycle operations (create, remove, find) for an enterprise bean. The class for the EJB home object is generated by the container's deployment tools. The EJB home object implements the enterprise bean's home interface. The client references an EJB home object to perform life-cycle operations on an EJB object. The client uses JNDI to locate an EJB home object.

EJB JAR file

A JAR archive that contains an EJB module.

EJB module

A deployable unit that consists of one or more enterprise beans and an EJB deployment descriptor.

EJB object

An object whose class implements the enterprise bean's remote interface. A client never references an enterprise bean instance directly; a client always references an EJB object. The class of an EJB object is generated by a container's deployment tools.

EJB server

Software that provides services to an EJB container. For example, an EJB container typically relies on a transaction manager that is part of the EJB server to perform the two-phase commit across all the participating resource managers. The J2EE architecture assumes that an EJB container is hosted by an EJB server from the same vendor, so it does not specify the contract between these two entities. An EJB server can host one or more EJB containers.

EJB server provider

A vendor that supplies an EJB server.

element

A unit of XML data, delimited by tags. An XML element can enclose other elements.

empty tag

A tag that does not enclose any content.

enterprise bean

A J2EE component that implements a business task or business entity and is hosted by an EJB container; either an entity bean, a session bean, or a message-driven bean.

enterprise bean provider

An application developer who produces enterprise bean classes, remote and home interfaces, and deployment descriptor files, and packages them in an EJB JAR file.

enterprise information system

The applications that constitute an enterprise's existing system for handling companywide information. These applications provide an information infrastructure for an enterprise. An enterprise information system offers a well-defined set of services to its clients. These services are exposed to clients as local or remote interfaces or both. Examples of enterprise information systems include enterprise resource planning systems, mainframe transaction processing systems, and legacy database systems.

enterprise information system resource

An entity that provides enterprise information system-specific functionality to its clients. Examples are a record or set of records in a database system, a business object in an enterprise resource planning system, and a transaction program in a transaction processing system.

Enterprise JavaBeans (EJB)

A component architecture for the development and deployment of object-oriented, distributed, enterprise-level applications. Applications written using the Enterprise JavaBeans architecture are scalable, transactional, and secure.

Enterprise JavaBeans Query Language (EJB QL)

Defines the queries for the finder and select methods of an entity bean having container-managed persistence. A subset of SQL92, EJB QL has extensions that allow navigation over the relationships defined in an entity bean's abstract schema.

entity

A distinct, individual item that can be included in an XML document by referencing it. Such an entity reference can name an entity as small as a character (for example, &lt;, which references the less-than symbol or left angle bracket, <). An entity reference can also reference an entire document, an external entity, or a collection of DTD definitions.

entity bean

An enterprise bean that represents persistent data maintained in a database. An entity bean can manage its own persistence or can delegate this function to its container. An entity bean is identified by a primary key. If the container in which an entity bean is hosted crashes, the entity bean, its primary key, and any remote references survive the crash.

entity reference

A reference to an entity that is substituted for the reference when the XML document is parsed. It can reference a predefined entity such as &lt; or reference one that is defined in the DTD. In the XML data, the reference could be to an entity that is defined in the local subset of the DTD or to an external XML file (an external entity). The DTD can also carve out a segment of DTD specifications and give it a name so that it can be reused (included) at multiple points in the DTD by defining a parameter entity.

error

A SAX parsing error is generally a validation error; in other words, it occurs when an XML document is not valid, although it can also occur if the declaration specifies an XML version that the parser cannot handle. See also fatal error, warning.

Extensible Markup Language

See XML.

external entity

An entity that exists as an external XML file, which is included in the XML document using an entity reference.

external subset

That part of a DTD that is defined by references to external DTD files.






fatal error

A fatal error occurs in the SAX parser when a document is not well formed or otherwise cannot be processed. See also error, warning.

filter

An object that can transform the header or content (or both) of a request or response. Filters differ from Web components in that they usually do not themselves create responses but rather modify or adapt the requests for a resource, and modify or adapt responses from a resource. A filter should not have any dependencies on a Web resource for which it is acting as a filter so that it can be composable with more than one type of Web resource.

filter chain

A concatenation of XSLT transformations in which the output of one transformation becomes the input of the next.

finder method

A method defined in the home interface and invoked by a client to locate an entity bean.

form-based authentication

An authentication mechanism in which a Web container provides an application-specific form for logging in. This form of authentication uses Base64 encoding and can expose user names and passwords unless all connections are over SSL.






general entity

An entity that is referenced as part of an XML document's content, as distinct from a parameter entity, which is referenced in the DTD. A general entity can be a parsed entity or an unparsed entity.

group

An authenticated set of users classified by common traits such as job title or customer profile. Groups are also associated with a set of roles, and every user that is a member of a group inherits all the roles assigned to that group.






handle

An object that identifies an enterprise bean. A client can serialize the handle and then later deserialize it to obtain a reference to the enterprise bean.

home handle

An object that can be used to obtain a reference to the home interface. A home handle can be serialized and written to stable storage and deserialized to obtain the reference.

home interface

One of two interfaces for an enterprise bean. The home interface defines zero or more methods for managing an enterprise bean. The home interface of a session bean defines create and remove methods, whereas the home interface of an entity bean defines create, finder, and remove methods.

HTML

Hypertext Markup Language. A markup language for hypertext documents on the Internet. HTML enables the embedding of images, sounds, video streams, form fields, references to other objects with URLs, and basic text formatting.

HTTP

Hypertext Transfer Protocol. The Internet protocol used to retrieve hypertext objects from remote hosts. HTTP messages consist of requests from client to server and responses from server to client.

HTTPS

HTTP layered over the SSL protocol.






IDL

Interface Definition Language. A language used to define interfaces to remote CORBA objects. The interfaces are independent of operating systems and programming languages.

IIOP

Internet Inter-ORB Protocol. A protocol used for communication between CORBA object request brokers.

impersonation

An act whereby one entity assumes the identity and privileges of another entity without restrictions and without any indication visible to the recipients of the impersonator's calls that delegation has taken place. Impersonation is a case of simple delegation.

initialization parameter

A parameter that initializes the context associated with a servlet.

ISO 3166

The international standard for country codes maintained by the International Organization for Standardization (ISO).

ISV

Independent software vendor.






J2EE

See Java 2 Platform, Enterprise Edition.

J2EE application

Any deployable unit of J2EE functionality. This can be a single J2EE module or a group of modules packaged into an EAR file along with a J2EE application deployment descriptor. J2EE applications are typically engineered to be distributed across multiple computing tiers.

J2EE component

A self-contained functional software unit supported by a container and configurable at deployment time. The J2EE specification defines the following J2EE components:


  • Application clients and applets are components that run on the client.



  • Java servlet and JavaServer Pages (JSP) technology components are Web components that run on the server.



  • Enterprise JavaBeans (EJB) components (enterprise beans) are business components that run on the server.



    J2EE components are written in the Java programming language and are compiled in the same way as any program in the language. The difference between J2EE components and "standard" Java classes is that J2EE components are assembled into a J2EE application, verified to be well formed and in compliance with the J2EE specification, and deployed to production, where they are run and managed by the J2EE server or client container.


J2EE module

A software unit that consists of one or more J2EE components of the same container type and one deployment descriptor of that type. There are four types of modules: EJB, Web, application client, and resource adapter. Modules can be deployed as stand-alone units or can be assembled into a J2EE application.

J2EE product

An implementation that conforms to the J2EE platform specification.

J2EE product provider

A vendor that supplies a J2EE product.

J2EE server

The runtime portion of a J2EE product. A J2EE server provides EJB or Web containers or both.

J2ME

See Java 2 Platform, Micro Edition.

J2SE

See Java 2 Platform, Standard Edition.

JAR

Java archive. A platform-independent file format that permits many files to be aggregated into one file.

Java 2 Platform, Enterprise Edition (J2EE)

An environment for developing and deploying enterprise applications. The J2EE platform consists of a set of services, application programming interfaces (APIs), and protocols that provide the functionality for developing multitiered, Web-based applications.

Java 2 Platform, Micro Edition (J2ME)

A highly optimized Java runtime environment targeting a wide range of consumer products, including pagers, cellular phones, screen phones, digital set-top boxes, and car navigation systems.

Java 2 Platform, Standard Edition (J2SE)

The core Java technology platform.

Java API for XML Processing (JAXP)

An API for processing XML documents. JAXP leverages the parser standards SAX and DOM so that you can choose to parse your data as a stream of events or to build a tree-structured representation of it. JAXP supports the XSLT standard, giving you control over the presentation of the data and enabling you to convert the data to other XML documents or to other formats, such as HTML. JAXP provides namespace support, allowing you to work with schema that might otherwise have naming conflicts.

Java API for XML Registries (JAXR)

An API for accessing various kinds of XML registries.

Java API for XML-based RPC (JAX-RPC)

An API for building Web services and clients that use remote procedure calls and XML.

Java IDL

A technology that provides CORBA interoperability and connectivity capabilities for the J2EE platform. These capabilities enable J2EE applications to invoke operations on remote network services using the Object Management Group IDL and IIOP.

Java Message Service (JMS)

An API for invoking operations on enterprise messaging systems.

Java Naming and Directory Interface (JNDI)

An API that provides naming and directory functionality.

Java Secure Socket Extension (JSSE)

A set of packages that enable secure Internet communications.

Java Transaction API (JTA)

An API that allows applications and J2EE servers to access transactions.

Java Transaction Service (JTS)

Specifies the implementation of a transaction manager that supports JTA and implements the Java mapping of the Object Management Group Object Transaction Service 1.1 specification at the level below the API.

JavaBeans component

A Java class that can be manipulated by tools and composed into applications. A JavaBeans component must adhere to certain property and event interface conventions.

JavaMail

An API for sending and receiving email.

JavaServer Faces Technology

A framework for building server-side user interfaces for Web applications written in the Java programming language.

JavaServer Faces conversion model

A mechanism for converting between string-based markup generated by JavaServer Faces UI components and server-side Java objects.

JavaServer Faces event and listener model

A mechanism for determining how events emitted by JavaServer Faces UI components are handled. This model is based on the JavaBeans component event and listener model.

JavaServer Faces expression language

A simple expression language used by JavaServer Faces UI component tag attributes to bind the associated component to a bean property or to bind the associated component's value to a method or an external data source, such as a bean property. Unlike JSP EL expressions, JavaServer Faces EL expressions are evaluated by the JavaServer Faces implementation rather than by the Web container.

JavaServer Faces navigation model

A mechanism for defining the sequence in which pages in a JavaServer Faces application are displayed.

JavaServer Faces UI component

A user interface control that outputs data to a client or allows a user to input data to a JavaServer Faces application.

JavaServer Faces UI component class

A JavaServer Faces class that defines the behavior and properties of a JavaServer Faces UI component.

JavaServer Faces validation model

A mechanism for validating the data a user inputs to a JavaServer Faces UI component.

JavaServer Pages (JSP)

An extensible Web technology that uses static data, JSP elements, and server-side Java objects to generate dynamic content for a client. Typically the static data is HTML or XML elements, and in many cases the client is a Web browser.

JavaServer Pages Standard Tag Library (JSTL)

A tag library that encapsulates core functionality common to many JSP applications. JSTL has support for common, structural tasks such as iteration and conditionals, tags for manipulating XML documents, internationalization and locale-specific formatting tags, SQL tags, and functions.

JAXR client

A client program that uses the JAXR API to access a business registry via a JAXR provider.

JAXR provider

An implementation of the JAXR API that provides access to a specific registry provider or to a class of registry providers that are based on a common specification.

JDBC

An API for database-independent connectivity between the J2EE platform and a wide range of data sources.

JMS

See Java Message Service.

JMS administered object

A preconfigured JMS object (a resource manager connection factory or a destination) created by an administrator for the use of JMS clients and placed in a JNDI namespace.

JMS application

One or more JMS clients that exchange messages.

JMS client

A Java language program that sends or receives messages.

JMS provider

A messaging system that implements the Java Message Service as well as other administrative and control functionality needed in a full-featured messaging product.

JMS session

A single-threaded context for sending and receiving JMS messages. A JMS session can be nontransacted, locally transacted, or participating in a distributed transaction.

JNDI

See Java Naming and Directory Interface.

JSP

See JavaServer Pages.

JSP action

A JSP element that can act on implicit objects and other server-side objects or can define new scripting variables. Actions follow the XML syntax for elements, with a start tag, a body, and an end tag; if the body is empty it can also use the empty tag syntax. The tag must use a prefix. There are standard and custom actions.

JSP container

A container that provides the same services as a servlet container and an engine that interprets and processes JSP pages into a servlet.

JSP container, distributed

A JSP container that can run a Web application that is tagged as distributable and is spread across multiple Java virtual machines that might be running on different hosts.

JSP custom action

A user-defined action described in a portable manner by a tag library descriptor and imported into a JSP page by a taglib directive. Custom actions are used to encapsulate recurring tasks in writing JSP pages.

JSP custom tag

A tag that references a JSP custom action.

JSP declaration

A JSP scripting element that declares methods, variables, or both in a JSP page.

JSP directive

A JSP element that gives an instruction to the JSP container and is interpreted at translation time.

JSP document

A JSP page written in XML syntax and subject to the constraints of XML documents.

JSP element

A portion of a JSP page that is recognized by a JSP translator. An element can be a directive, an action, or a scripting element.

JSP expression

A scripting element that contains a valid scripting language expression that is evaluated, converted to a String, and placed into the implicit out object.

JSP expression language

A language used to write expressions that access the properties of JavaBeans components. EL expressions can be used in static text and in any standard or custom tag attribute that can accept an expression.

JSP page

A text-based document containing static text and JSP elements that describes how to process a request to create a response. A JSP page is translated into and handles requests as a servlet.

JSP scripting element

A JSP declaration, scriptlet, or expression whose syntax is defined by the JSP specification and whose content is written according to the scripting language used in the JSP page. The JSP specification describes the syntax and semantics for the case where the language page attribute is "java".

JSP scriptlet

A JSP scripting element containing any code fragment that is valid in the scripting language used in the JSP page. The JSP specification describes what is a valid scriptlet for the case where the language page attribute is "java".

JSP standard action

An action that is defined in the JSP specification and is always available to a JSP page.

JSP tag file

A source file containing a reusable fragment of JSP code that is translated into a tag handler when a JSP page is translated into a servlet.

JSP tag handler

A Java programming language object that implements the behavior of a custom tag.

JSP tag library

A collection of custom tags described via a tag library descriptor and Java classes.

JSTL

See JavaServer Pages Standard Tag Library.

JTA

See Java Transaction API.

JTS

See Java Transaction Service.






keystore

A file containing the keys and certificates used for authentication.






life cycle (J2EE component)

The framework events of a J2EE component's existence. Each type of component has defining events that mark its transition into states in which it has varying availability for use. For example, a servlet is created and has its init method called by its container before invocation of its service method by clients or other servlets that require its functionality. After the call of its init method, it has the data and readiness for its intended use. The servlet's destroy method is called by its container before the ending of its existence so that processing associated with winding up can be done and resources can be released. The init and destroy methods in this example are callback methods. Similar considerations apply to the life cycle of all J2EE component types: enterprise beans, Web components (servlets or JSP pages), applets, and application clients.

life cycle (JavaServer Faces)

A set of phases during which a request for a page is received, a UI component tree representing the page is processed, and a response is produced. During the phases of the life cycle:


  • The local data of the components is updated with the values contained in the request parameters.



  • Events generated by the components are processed.



  • Validators and converters registered on the components are processed.



  • The components' local data is updated to back-end objects.



  • The response is rendered to the client.

Posted by 아름프로

     


    Enterprise JavaBeans[tm] Technology vs. COM+/MTS - Industry Quotes and White Papers.
    Posted by 아름프로

    The secret's out: Java[tm] technology isn't just for programming applets which run on the client side in web browsers, or for writing Internet applications.

     



    JSP



    This page lists content under White Papers for JSP
    Posted by 아름프로
    White Paper

    J2EE Connector Architecture









      









     






    Integrating Java applications with existing Enterprise Applications





    Executive Summary



    The J2EE Connector Architecture, part of Java 2 Platform, Enterprise Edition (J2EE) 1.3, specifies a standard architecture for accessing resources in diverse Enterprise Information Systems (EIS). These may include ERP systems such as SAP R/3, mainframe transaction processing systems such as IBM CICS, legacy applications and non-relational database systems.


    Today, the JDBC Data Access API provides easy integration with relational database systems for Java applications. In a similar manner, the Connector Architecture simplifies integration of Java applications with heterogeneous EIS systems.


    This paper describes the 1.0 version of the Connector Architecture specification at a high level, including:


    • System contracts defined between the J2EE platform-based application server and the EIS resource adapter, providing security, connection pooling, and transaction management facilities.

    • Common Client Interface (CCI) between the EIS resource adapter and application components or tools.

    • Packaging and deployment of resource adapters in an application server environment.



    The Connector Architecture Specification document contains detailed information
    on the architecture. The specification and the latest information on the Connector Architecture can be found at http://java.sun.com/j2ee/connector.





    Introduction



    Most companies have enormous investments in Enterprise Information Systems (EISs) such as ERP systems, legacy systems, mainframe database and transaction processing systems. Today, leveraging these systems as part of a web-based, multi-tiered application is challenging. EIS vendors provide proprietary interfaces, with varying levels of support for enterprise application integration. Application server vendors have to build and maintain separate interfaces for different supported EISs, and application developers need to manage the system-level issues of security, transactions and connection pooling within the applications themselves.


    Challenges in EIS integration


    Integration with EISs presents many challenges.


    • The back end EISs are complex and heterogeneous. The application programming models vary widely between these systems, increasing the complexity and effort of application integration. Application development tools that can simplify these integration efforts are critical.

    • Transaction and security management add complexity to integration with back-end EIS systems.

    • The web-based architecture requires significant scalability in terms of the potential number of clients that can access enterprise applications.



    J2EE Platform and Connector Architecture


    The Connector Architecture addresses these challenges directly. The J2EE platform provides a reusable component model, using Enterprise JavaBeans and JavaServer Pages technologies to build and deploy multi-tier applications that are platform and vendor-independent. The J2EE platform shares the "Write Once, Run Anywhere" approach of the Java platform, and has significant industry support.



    The Connector Architecture adds simplified EIS integration to this platform. The goal is to leverage the strengths of the J2EE platform -- including component models, transaction and security infrastructures -- to address the challenges of EIS integration.


    The Connector Architecture defines a common interface between application servers and EIS systems, implemented in EIS-specific resource adapters that plug into application servers. The result is simplified enterprise application integration, using a scalable, standard architecture that leverages the benefits of the J2EE platform.




    Figure: Using the Connector Architecture, each EIS vendor writes only one Connector Architecture-compliant resource adapter, and each application server vendor extends its system once to support integration with any number of EIS resource adapters and underlying EISs.


    Developing a standard contract between application components, application servers and EISs reduces the overall scope of the integration effort. This delivers benefits for the entire Java development community:


    • EIS vendors only need to create one, open interface (implemented in the resource adapter) to the EIS. This resource adapter can be used with any compliant J2EE application server, and provides a standard interface for tool and Enterprise Application Integration (EAI) vendors. Maintaining a single interface reduces the development effort for the EIS vendor, who today must build point solutions targeted at individual vendors and certify individual systems for compliance.



    • Application server vendors (vendors of any compliant J2EE servers) only need to extend their systems once to support the system contracts defined by the Connector Architecture. Then they can simply plug in multiple resource adapters to extend the server, supporting integration with multiple EISs without any EIS-specific system-level programming.



    • Enterprise Application Integration (EAI) and development tool vendors use the Common Client Interface (CCI) to simplify access to EIS resources with a standard application-level interface.



    • Application component developers are shielded from the complexity of transaction, connection and security concerns when accessing data or functions in EISs and can concentrate instead on developing business and application logic.



    The Connector Architecture is the product of the Java Community Process program, with the contributions of a wide range of tool, server, and EIS vendors.


    Connector Architecture Overview



    The Connector Architecture is implemented in an application server and an EIS-specific resource adapter. A resource adapter is a system library specific to an EIS and provides connectivity to the EIS. A resource adapter is analogous to a JDBC driver. The interface between a resource adapter and the EIS is specific to the underlying EIS; it can be a native interface.


    The Connector Architecture has three main components:


    • System-level contracts between the resource adapter and the application server.

    • A Common Client Interface (CCI) that provides a client API for Java applications and development tools to access the resource adapter.

    • A standard packaging and deployment facility for resource adapters.



    Figure: The Connector Architecture defines system contracts between the application server and the resource adapter. It also defines a client API between the resource adapter and application components. These contracts are described further in this paper.


    The containers in an application server can be web containers (hosting JSP pages and Servlets) and EJB containers. The application server provides a set of services in an implementation-specific way. These services include a transaction manager, a security service manager, and a connection pooling mechanism. The Connector Architecture does not define how an application server implements these services.


    • The application server vendor extends the server to support the system contracts defined by the Connector Architecture. The system contracts can be considered as a Service Provider Interface (SPI). The SPI provides a standard way for extending a container to support connectivity to multiple EISs.



    • The EIS vendor (or a third party ISV) creates a resource adapter for the EIS, using an EIS-specific interface to interact with the EIS itself and supporting the system contracts for the application server. In addition, the resource adapter provides a client API called the Common Client Interface, or CCI, defined by the Connector Architecture. Application development tools or application components use the Common Client Interface to interact with the resource adapter directly.



    The resource adapter runs in the application server's address space and manages access to the EIS resources. A Connector Architecture-compliant resource adapter will work with any compliant J2EE server.


    System-Level Contracts



    The 1.0 version of the Connector Architecture provides the following system contracts, to be implemented by the resource adapter and the application server:


    • Connection management

    • Transaction management

    • Security management




    For the 1.0 release these contracts cover the most pressing concerns for enterprise application integration: transactions, security, and scalability. In future versions of the specification, the system contracts will be extended to include support for Java Message Service (JMS) pluggability and thread management. The JMS pluggability will add support for asynchronous message-based communication.


    The following sections offer brief overviews of these contracts.


    Connection Management Contract


    A connection to an EIS is an expensive system resource. To support scalable applications, an application server needs to pool connections to the underlying EISs. This connection pooling mechanism should be transparent to applications accessing the underlying EIS, simplifying application development.


    The Connection Management contract supports connection pooling and management, optimizing application performance and increasing scalability. The Connection Management contract is defined between an application server and a resource adapter. It provides support for an application server to implement its connection pooling facility. The application server structures and implements its connection pool in an implementation-specific way; the pool can be primitive or sophisticated, depending on the quality of service the application server offers.


    The application server uses the connection management contract to:


    • create new connections to an EIS

    • configure connection factories in the JNDI namespace

    • find the right connection from an existing set of pooled connections



    The connection management contract enables an application server to hook in its services, such as transaction management and security management.
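
    To make the pattern concrete, here is a minimal sketch (not from the original paper) of how an application component obtains a pooled connection through the CCI. The JNDI name "java:comp/env/eis/MyEIS" is a hypothetical binding chosen by the deployer:

    import javax.naming.InitialContext;
    import javax.resource.cci.Connection;
    import javax.resource.cci.ConnectionFactory;

    public class EisClient {
        public Connection openConnection() throws Exception {
            // The deployer binds the adapter's connection factory in JNDI;
            // the name below is hypothetical.
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf =
                    (ConnectionFactory) ctx.lookup("java:comp/env/eis/MyEIS");
            // getConnection() is routed through the application server's
            // pool; the pooling itself is invisible to the component.
            return cf.getConnection();
        }
    }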


    Transaction Management Contract



    Transactional access to EISs is an important requirement for business applications. The Connector Architecture supports the concept of transactions - a number of operations that must be committed together or not at all for the data to remain consistent and to maintain data integrity.


    In many cases, a transaction (termed a local transaction) is limited in scope to a single EIS system, and the EIS resource manager itself manages such a transaction. An XA transaction (or global transaction), by contrast, can span multiple resource managers. This form of transaction requires transaction coordination by an external transaction manager, typically bundled with an application server. A transaction manager uses a two-phase commit protocol to manage a transaction that spans multiple resource managers (EISs). It uses a one-phase commit optimization if only one resource manager is participating in an XA transaction.


    The Connector Architecture defines a transaction management contract between an application server and a resource adapter (and its underlying resource manager). The transaction management contract extends the connection management contract and provides support for management of both local and XA transactions. The contract has two parts, depending on the type of transaction:


    • A JTA XAResource-based contract between a transaction manager and an EIS resource manager.

    • A local transaction management contract.




    These contracts enable an application server to provide the infrastructure and runtime environment for transaction management. Application components rely on this transaction infrastructure to support the component-level transaction model.
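
    As a hedged illustration of the local-transaction case, the sketch below demarcates a resource-manager-local transaction through the CCI; it assumes a CCI Connection obtained as shown earlier. In the XA case the component does none of this: the application server's transaction manager drives the adapter's XAResource instead.

    import javax.resource.cci.Connection;
    import javax.resource.cci.LocalTransaction;

    public class LocalTxExample {
        public void updateEis(Connection con) throws Exception {
            LocalTransaction tx = con.getLocalTransaction();
            tx.begin();
            try {
                // ... one or more Interactions against the EIS ...
                tx.commit();
            } catch (Exception e) {
                tx.rollback(); // undo everything since begin()
                throw e;
            }
        }
    }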


    Because EIS implementations are so varied, the transactional support must be very flexible. The Connector Architecture imposes no requirements on the EIS for transaction management. Depending on the implementation of transactions within the EIS, a resource adapter may provide:


    • No transaction support at all - this is typical of legacy applications and many back-end systems.

    • Support for only local transactions

    • Support for both local and XA transactions



    An application server is required to support all three levels of transactions. This ensures that application servers can support EISs at different transaction levels.


    Security Contract


    It is critical that an enterprise be able to depend on the information in its EIS for its business activities. Any loss or inaccuracy of information or any unauthorized access to the EIS can be extremely costly to an enterprise. There are mechanisms that can be used to protect an EIS against such security threats, including:


    • Identification and authentication of principals (human users) to verify they are who they claim to be.

    • Authorization and access control to determine whether a principal is allowed to access an application server and/or an EIS.

    • Security of communication between an application server and an EIS. Communication over insecure links can be protected using a protocol (for example, Kerberos) that provides authentication, integrity, and confidentiality services. Communication can also be protected by using a secure links protocol (for example, SSL).




    The Connector Architecture extends the J2EE security model to include support for secure connectivity to EISs.


    The security management contract is defined to be independent of security mechanisms and technologies. This enables application servers and EISs with different levels of support for security technology to support the security contract. For example, the security management contract can support basic user-password based authentication or a Kerberos-based end-to-end security environment. It can also support EIS-specific security mechanisms.
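
    The two sign-on styles the specification names, container-managed and component-managed, can be sketched as follows. This is a minimal illustration, not from the original paper; MyConnectionSpec stands in for an adapter-supplied ConnectionSpec implementation:

    import javax.resource.cci.Connection;
    import javax.resource.cci.ConnectionFactory;
    import javax.resource.cci.ConnectionSpec;

    public class SignOnStyles {

        // Container-managed sign-on: credentials are configured by the
        // deployer, so the component passes no security information.
        Connection containerManaged(ConnectionFactory cf) throws Exception {
            return cf.getConnection();
        }

        // Component-managed sign-on: the component supplies credentials
        // through an adapter-specific ConnectionSpec (hypothetical here).
        Connection componentManaged(ConnectionFactory cf) throws Exception {
            return cf.getConnection(new MyConnectionSpec("user", "secret"));
        }
    }

    // ConnectionSpec is a marker interface; real adapters ship their own
    // implementations carrying the properties their EIS understands.
    class MyConnectionSpec implements ConnectionSpec {
        private final String userName;
        private final String password;

        MyConnectionSpec(String userName, String password) {
            this.userName = userName;
            this.password = password;
        }

        public String getUserName() { return userName; }
        public String getPassword() { return password; }
    }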


    EIS Sign-on


    Creating a new physical connection requires a sign-on to an EIS. Changing the security context on an existing physical connection can also require EIS sign-on; the latter is termed re-authentication. An EIS sign-on typically involves one or more of the following steps:


    • Determining a security principal under whose security context a physical connection to an EIS will be established.

    • Authentication of a security principal if it is not already authenticated.

    • Establishing a secure association between the application server and the EIS. This enables additional security mechanisms (for example, data confidentiality and integrity) to be applied to communication between the two entities.

    • Access control to EIS resources




    The Connector Architecture supports single sign-on across multiple EISs. Single sign-on capabilities are useful in applications that need access to resources in multiple EIS systems. For example, an employee self-service application can give employees access to HR and Payroll records with a single sign-on.


    The Security Contract extends the Connection Management contract to support EIS sign-on, re-authenticating pooled connections as necessary.


    Common Client Interface



    The CCI defines a standard client API for application components. The CCI enables application components and Enterprise Application Integration (EAI) frameworks to drive interactions across heterogeneous EISs using a common client API.


    The target users of the CCI are enterprise tool vendors and EAI vendors. Application components themselves may also write to the API, but the CCI is a low-level API. The specification recommends that the CCI be the basis for richer functionality provided by the tool vendors, rather than being an application-level programming interface used by most application developers.


    Challenges of Client Tool Integration



    The heterogeneity challenges of EIS/application server integration also hold true for enterprise application development tool vendors and EAI frameworks. Typically, EISs provide proprietary client APIs. An application development tool or EAI framework needs to adapt these different client APIs to a higher abstraction layer. This abstraction layer raises the API to a common level on which tool and EAI vendors can build useful functionality.




    The CCI solves this problem by providing an API that is common across heterogeneous EISs. This avoids the need for tool and EAI vendors to adapt diverse EIS-specific client APIs. These vendors can use the CCI to build higher-level functionality over the underlying EISs.


    Common Client Interface


    The CCI defines a remote function-call interface that focuses on executing functions on an EIS and retrieving the results. The CCI is independent of any specific EIS; for example, it does not define data types specific to a particular EIS. However, the CCI is capable of being driven by EIS-specific metadata from a repository.


    The CCI enables applications to create and manage connections to an EIS, execute an interaction, and manage data records as input, output or return values. The CCI is designed to be toolable, leveraging the JavaBeans architecture and Java Collection framework.
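
    A minimal sketch of this programming model (illustrative only; the JNDI name, the record name, and the InteractionSpec implementation passed in are adapter-specific and hypothetical here):

    import javax.naming.InitialContext;
    import javax.resource.cci.Connection;
    import javax.resource.cci.ConnectionFactory;
    import javax.resource.cci.IndexedRecord;
    import javax.resource.cci.Interaction;
    import javax.resource.cci.InteractionSpec;
    import javax.resource.cci.Record;

    public class CciCall {
        // spec names the EIS function to run; each adapter supplies its
        // own InteractionSpec implementation.
        public Record call(InteractionSpec spec) throws Exception {
            ConnectionFactory cf = (ConnectionFactory)
                    new InitialContext().lookup("java:comp/env/eis/MyEIS");
            Connection con = cf.getConnection();
            try {
                Interaction ix = con.createInteraction();
                IndexedRecord input =
                        cf.getRecordFactory().createIndexedRecord("Input");
                input.add("42"); // positional input argument
                Record output = ix.execute(spec, input);
                ix.close();
                return output;
            } finally {
                con.close();
            }
        }
    }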


    The 1.0 version of the Connector Architecture recommends that a resource adapter support the CCI as its client API, while it requires that the resource adapter implement the system contracts. A resource adapter may choose to offer a client API different from the CCI, such as a client API based on the JDBC API.


    JDBC API and Connectors



    The relationship between the JDBC API and Connectors should be understood from the perspectives of application contract and system contracts.



    • The JDBC API defines a standard client API for accessing relational databases, while the CCI defines an EIS-independent client API for EISs that are not relational databases. The JDBC API is the recommended API for accessing relational databases, while the CCI is the recommended client API for other types of EISs. (A sketch contrasting the two lookup styles follows this list.)



    • At the system contract level, the Connector SPIs may be viewed as a generalization and enhancement of the JDBC 2.0 contracts. Future JDBC specifications may align with the Connector SPIs by offering them as an option alongside the JDBC 2.0 SPIs. Another option for application server vendors is to wrap JDBC drivers under the Connector system contracts.
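
    For comparison, both client APIs share the same JNDI-plus-factory usage pattern; only the API the factory hands back differs. A sketch of the JDBC side (the JNDI name is hypothetical):

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class JdbcLookup {
        Connection open() throws Exception {
            DataSource ds = (DataSource)
                    new InitialContext().lookup("java:comp/env/jdbc/MyDB");
            return ds.getConnection(); // pooled by the server, as with CCI
        }
    }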



    Packaging and Deployment


    The Connector Architecture provides packaging and deployment interfaces, so that various resource adapters can easily plug into compliant J2EE application servers in a modular manner.




    A resource adapter provider develops a set of Java interfaces and classes as part of its implementation of a resource adapter. These Java classes implement Connector Architecture-specified contracts and EIS-specific functionality provided by the resource adapter. The development of a resource adapter can also require use of native libraries specific to the underlying EIS.


    The Java interfaces and classes are packaged together (with required native libraries, help files, documentation, and other resources) with a deployment descriptor to create a Resource Adapter Module. A deployment descriptor defines the contract between a resource adapter provider and a deployer for the deployment of a resource adapter.


    A resource adapter module may be deployed as a shared, stand-alone module or packaged as part of a J2EE application.
    During deployment, the deployer installs a resource adapter module on an application server and then configures it into the target operational environment. The configuration of a resource adapter is based on the properties defined in the deployment descriptor as part of the resource adapter module.


    Connector Architecture and Enterprise Application Integration



    To illustrate the potential benefits of the Connector Architecture, this section provides two scenarios from different perspectives.


    Integrating an EIS with Multiple Tools and Servers



    A software vendor provides an ERP system focused on mid-sized manufacturing companies. Its customers are starting to build multi-tier, Java applications and want to build tightly coupled integration between these applications and the vendor's ERP system.


    Although the ERP vendor may publish an API, not all of the application server vendors support it. Also, the ERP system's customers use a number of different application servers, which presents a potential logistical problem for the ERP vendor.


    Instead of building or certifying interfaces for each system, the ERP vendor creates a single resource adapter using the Connector Architecture. This resource adapter implements Connector Architecture specified connection, security, and transaction contracts. The vendor then makes this adapter available, and informs customers that they can work with any compliant J2EE application server. The ERP vendor also implements the CCI in its resource adapter, opening up access directly to client components, or to a wide range of application development or EAI tools.


    Using the Connector Architecture significantly reduces the ERP vendor's development effort, giving it immediate integration and consistent operation with a wide variety of compliant J2EE tools and application servers.


    Business-to-Business Commerce Solution



    The following scenario illustrates the use of the Connector Architecture in a Business-to-Business supply chain solution.


    Wombat Corporation is a manufacturer implementing a B2B e-commerce solution that improves interactions with its suppliers. Like most manufacturers, Wombat has enormous investments in its existing EIS systems, including an ERP system and a mainframe transaction processing system.


    Wombat buys a compliant J2EE application server (called B2B server in this example) that supports interactions with multiple buyers/suppliers using XML and HTTP/HTTPS. Wombat integrates access to its EIS systems using off-the-shelf resource adapters that plug into the B2B server. Wombat can deploy as many resource adapters as it has EISs to integrate. An application server can "plug in" multiple resource adapters for the systems required for an application.


    This scenario illustrates an important point: the Connector Architecture is designed for creating tightly coupled integration -- typically integration within the enterprise. Operations between different companies, such as a manufacturer and its supplier, are generally loosely coupled; for this, XML messaging is more appropriate.


    The Evolution of the Connector Architecture



    The Connector Architecture is a product of the Java Community Process program, an open process used by the Java community to develop and revise Java technology and specifications. Sun's partners in the Connector effort are EIS vendors, development tool vendors, and EAI vendors. Key participants include BEA, Fujitsu, IBM, Inline, Inprise, iPlanet, Motorola, Oracle, SAP, Sybase, Tibco, and Unisys. To date the standard enjoys strong industry support, as all stakeholders stand to benefit from its adoption.


    The 1.0 release of the J2EE Connector Architecture specification does not address all of the interfaces and system contracts that could potentially be required. The goal of the 1.0 release was to address the most pressing needs in a way that would speed industry adoption of the standard. For example, the 1.0 release specifies three system-level contracts, described above. These are mandatory components of the interface. Future system level contracts may address thread management and messaging through Java Message Service (JMS).


    The Common Client Interface (CCI) is optional in the 1.0 implementation. The Connector Architecture does not address the issue of meta data for data representation and type mapping, which will certainly be relevant in the use of CCI. This issue will be addressed in future versions.





    Summary


    Just as the JDBC API extended the Java platform to integrate relational databases, the Connector Architecture extends the J2EE platform to integrate and extend the EISs that manage valuable processes and data in the enterprise. The Connector Architecture enables scalable, simplified access to valuable enterprise resources, without compromising data integrity or security on the EISs.












    copyright © Sun Microsystems, Inc 

     



Posted by 아름프로

J2EE v1.4 Specifications

2005. 2. 11. 08:31
Posted by 아름프로

Version 1.5.0 or 5.0?

2005. 2. 10. 22:17

    

Version 1.5.0 or 5.0?




Both version numbers "1.5.0" and "5.0" are used to identify this
release of the Java 2 Platform Standard Edition.
Version "5.0" is the product version, while
"1.5.0" is the developer version.
The number "5.0" is used to better reflect the level of maturity,
stability, scalability and security of the J2SE.  


The number "5.0" was arrived at by dropping the leading "1."
from "1.5.0".   Where you might have expected to see 1.5.0, it
is now 5.0 (and where it was 1.5, it is now 5).



"Version 5.0" Used in Platform and Product Names



Version 5.0 is used in the platform and product names as given
in this table:

Full Name                                        Abbreviation
----------------------------------------------   ------------
Platform name:
  Java™ 2 Platform Standard Edition 5.0          J2SE™ 5.0
Products delivered under the platform:
  J2SE™ Development Kit 5.0                      JDK™ 5.0
  J2SE™ Runtime Environment 5.0                  JRE 5.0



Due to its significant popularity within the Java developer community,
the development kit has reverted to the name "JDK" from "Java 2 SDK"
(or "J2SDK"), and the runtime environment has reverted to "JRE" from "J2RE".
Note that "JDK" now stands for "J2SE Development Kit" (to distinguish
it from the J2EE Development Kit). The name "Java Development Kit"
is no longer used, and has not been officially used since 1.1, prior
to the advent of J2EE and J2ME.



As before, the "2" in Java 2 Platform Standard Edition indicates the
2nd generation Java platform, introduced with J2SE 1.2.  This generation
number is also used with J2EE and J2ME.



"Version 1.5.0" Used by Developers




J2SE also keeps the version number 1.5.0 (or 1.5) in some places that
are visible only to developers, or where the version number
is parsed by programs.  As mentioned, 1.5.0 refers to exactly the same
platform and products numbered 5.0.  Version numbers 1.5.0 and 1.5 are used at:



      
  • java -version  (among other info, returns  java version "1.5.0")
      
  • java -fullversion  (returns  java full version "1.5.0-b64")
      
  • javac -source 1.5  (javac -source 5  also works)
      
  • java.version  system property
      
  • java.vm.version  system property
      
  • @since 1.5  tag values
      
  • jdk1.5.0  installation directory
      
  • jre1.5.0  installation directory
      
  • http://java.sun.com/j2se/1.5.0  website (http://java.sun.com/j2se/5.0  also works)
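
For example, a short check program (a sketch; the exact build suffix in the output will vary) that reads the two version properties:

public class VersionCheck {
    public static void main(String[] args) {
        // Both properties report the developer-style numbers, e.g. "1.5.0".
        System.out.println("java.version    = " + System.getProperty("java.version"));
        System.out.println("java.vm.version = " + System.getProperty("java.vm.version"));
    }
}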






Posted by 아름프로


Java™ 2 Platform Standard Edition 5.0 Development Kit (JDK 5.0).
Posted by 아름프로
Comparison of ebXML and RosettaNet
Related documents.
Posted by 아름프로
See the link for details.

mySAP Supply Chain Management: A Quick, Complete Solution to Connect and Empower Your Organization
In an age of intense competition, supply chain efficiency isn't just a requirement for success. It's a necessity for survival.

mySAP Supply Chain Management (mySAP SCM) can help your organization transform a linear supply chain into an adaptive supply chain network, allowing you to access the knowledge and resources of your peers, adjust intelligently to changing market conditions, and remain customer-focused, giving your company a competitive edge.

Many other companies have also used mySAP SCM to improve their business and operations processes. In fact, mySAP SCM is the only complete supply chain solution that empowers companies to adapt their supply chain processes to an ever-changing competitive environment.

mySAP SCM enables adaptive supply chain networks by providing companies with planning and execution capabilities for managing enterprise operations, as well as coordination and collaboration technology to extend those operations beyond corporate boundaries. As a result, companies achieve measurable and sustainable improvements through cost reductions, service-level increases, and productivity gains -- ultimately leading to stronger profit margins.

Posted by 아름프로
SAP Business Maps

Cross-Industry Business Maps -- Supply Chain Management


Up-to-date material you can take in at a glance.
Posted by 아름프로
From the SAP NetWeaver material: the detailed section on Open Standards.
It lists all the standards bodies worth consulting when setting standards in each field.

===========================================
SAP NetWeaver
Technical Details -- Open Standards
...
...
...
What's more, SAP supports industry data exchange standards such as HL7 (healthcare), papiNet (mill products), PIDX (oil and gas), and UCCnet (retail and consumer products). SAP is also helping to drive enhanced compatibility of applications and business processes through standards initiatives such as ACORD (insurance), AIAG (automotive), CWM (business intelligence), GCI (retail and consumer products), HR-XML (human resources), OPC (process industries), SPEC2000 (aerospace and defense), S.W.I.F.T. (banking), TWIST (treasury), VICS (supply chain management), and XBRL (accounting).

SAP is also a major contributor to United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) Core Components and Universal Business Language (UBL). Broad adoption of these specifications promises to improve interoperability of IT systems and software applications, especially across industries. These specifications will pave the way for next-generation, XML-based technology standards.

Posted by 아름프로
Uses the editor from http://propedit.sourceforge.jp.
It is provided as a plug-in for both Eclipse and JBuilder.

To install it in Eclipse:
=================================
[ INSTALLATION ]




            
  1. In Eclipse, choose "Help" -> "Software Updates" -> "Update Manager". The Update Manager opens.


            
  2. In the "Feature Updates" view at the lower left of an 'Update Manager', please carry out the right click of the "Sites to Visit", and create a site bookmark by "New" -> "Site Bookmark...".

                    - The bookmark to create should input the following "URL" and should push an "Finish" button.

                    Name: Arbitrary input

                    URL(for Eclipse2.1.x) : http://propedit.sourceforge.jp/eclipse_plugins/2.1/

                    URL(for Eclipse3.0) : http://propedit.sourceforge.jp/eclipse_plugins/3/

                    Bookmark type: Eclipse update site


            
  3. Once the site bookmark is created, it appears at the bottom of "Feature Updates".

                    Clicking "jp.gr.java_conf.ussiy.app.propedit.eclipse.feature.PropertiesEditorFeature x.x.x" shows a preview in the right-hand window. Click the "Install Now" button near the lower right.


            
  4. An installation wizard starts; click through the "Next" buttons.


            
  5. "You will need to restart the workbench for the changes to take effect. Would you like to restart now?" is displayed. Please reboot Eclipse according to a dialog.


Posted by 아름프로
getResource can run into problems once the application is packaged as a jar.

See the following for details:

http://java.sun.com/docs/books/tutorial/uiswing/misc/icon.html#getresource
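
A minimal sketch of the getResource approach (the resource path "/images/logo.png" is hypothetical): loading through the class loader works both from the file system and from inside a jar, whereas new File(...) paths break once the images are packed into the jar.

import java.net.URL;
import javax.swing.ImageIcon;

public class IconLoader {
    public static ImageIcon load(String path) {
        // path is an absolute classpath location, e.g. "/images/logo.png".
        URL url = IconLoader.class.getResource(path);
        return (url != null) ? new ImageIcon(url) : null;
    }
}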
Posted by 아름프로

[eclipse_home]/configuration/org.eclipse.ui.ide/recentWorkspaces.xml
Posted by 아름프로
"아마추어는 문제를 복잡하게 만들지만, 프로는 명쾌함과 간결함을 추구합니다."

닛산 자동차의 최고경영자 카를로스 곤 사장의 학창 시절에 깊은 인상을 남겼던 S.J. 라그로뵐 신부의 말이다.
내가 여러분에게 권하고 싶은 점도 바로 명쾌함과 간결함이다. 모든 종류의 커뮤니케이션의 기본은 명쾌함과 간결함이다. 보고서 작성에서 지켜야 할 첫째 원칙은 명료함과 간결함을 유지하라는 것이다.

그러면 명쾌함과 간결함은 언제 나오는가? 사고가 정리되어 있어야 한다. 여러분들이 전달하고자 하는 메시지를 '간결하고 날렵하게' 전달하려면 평소에 사물이나 현상을 단순화시키는 능력을 꾸준히 갈고 닦아야 한다.
특히 여러분의 보고서를 받아보기를 원하는 사람들은 대부분 시간에 쫓기는 사람들일 가능성이 높다. 그들은 여러분의 보고서를 통해서 짧은 시간 안에 유용한 핵심 포인트를 간파하기를 바란다. 그런 고객들의 요구를 만족시킬 수 있으냐에 따라 보고서의 승패가 결정된다.

두 번째 원칙은 고객감동이다. 고객이 이 보고서에서 기대하는 것이 정확하게 무엇인지를 알아야 한다. 만일 여러분이 보고서 작성을 주문 받았을 때, 권하고 싶은 방법 가운데 하는 '1인칭 기법'을 활용하는 일이다. 여러분 자신이 보고서를 주문한 고객(상사)이라 가정하고 상상해 보라. 아마도 고객이 무엇을 원하는지를 정확하게 파악할 수 있을 것이다.
내가 이야기하고 싶은 핵심 포인트는 보고서의 목적과 용도를 분명히 하라는 것이다. 아무리 오랜 시간을 정성을 들여서 작성한 보고서라 하더라도 고객의 요구를 정확히 만족시키지 못하면 소용없는 일이다. 항상 보고서의 목적으로부터 벗어나지 않아야 한다.

셋째, 첫 페이지에서 요점을 정리해 주라. 일반적으로 'Summary' 부분을 앞에 넣는 것이 좋겠다. 'One Page Proposal'을 기억해 둘 필요가 있다. 보고서 전체 내용인 단 한 페이지로 정확하게 요약될 수 있도록 해야 한다.
보고서는 읽는 사람의 입장에서 바쁜 경우 한 페이지만으로도 충분히 핵심을 파악할 수 있도록 해야 한다. 중언부언 끝에 '이렇다. 저렇다'는 결론이 나중에 나오도록 할 것이 아니라, 보고서의 첫 페이지에서 결론이나 선택의 범위를 정확하게 제시하는 것이 좋다.

넷째, 보고서는 실용성을 지녀야 한다. 상사가 보고서를 맡길 때는 평균적인 기대 수준이 있다. 상사가 보고서를 요구할 때는 자신의 기존 지식이나 고정관념에 바탕을 둔 잠정적인 결론을 갖고 있는 경우가 많다. 실제로 그들은 여러분의 보고서를 통해서 자신이 본능적으로 올바르다고 생각하는 것을 검증하기를 내심 바라고 있을지도 모른다.
GE의 잭 웰치 전 회장은 오랜 회사 경험을 토대로 신입 직원들에게 이런 조언을 하고 있다.

"여러분들이 높이 오르고 싶으면 자신의 생각과 시간을 그 이상으로 나아가야 한다. 상사가 운행하?생각의 열차에 부가 가치를 더해 주는 것이다."
특별한 부가가치란 기대 수준을 휠씬 뛰어넘는 'Something Special'한 정보나 지식 그리고 제안을 담을 수 있어야 한다는 말이다. 결국 보고서 작성을 주어진 일을 한다고 생각해서는 이렇게 할 수 없다. 기대를 뛰어넘는 보고서는 열성을 받쳐서 자신의 비즈니스를 한다고 믿을 때 가능한 일이다.

다섯째, 보고서는 믿음과 자신만의 컬러를 담고 있어야 한다. 주관적인 제안이나 주장들을 입증할 만한 객관적인 숫자들이 풍부하게 포함되어 있어야 한다. 숫자나 사례가 여러분의 제안을 충분히 입증할 수 있도록 해야 한다. 보고서에 믿음을 주기 위해선 가능한 객관적인 사실을 숫자로 표기하거나 실제 경험이 들어가야 한다.
동시에 자신만의 컬러를 지니기 위해서는 기대 효과나 여러분 자신의 의견 즉 "따라서 나의 제안은...." "이런 저런 제안들 가운데서 우리가 선택할 수 있는 우선 순위는 ..." 등과 같은 내용을 포함해야 한다. 상사가 보고서를 통해서 의사 결정을 내릴 수 있는 몇 가지 대안을 제시할 수 있다면 무척 유용할 것이다. 언제나 자신의 생각을 세워두는 일은 보고서 작성에서도 큰 효과를 발휘하게 될 것이다.

마지막으로 보고서도 상품이라고 생각해야 한다. 수많은 보고서들 가운데서 고객의 눈길을 끌만한 내용과 포장으로 마케팅에서 승리하기 위해 무엇을 어떻게 해야 할 것인가를 늘 생각해야 한다. 구성, 형식, 내용, 컬러 등을 어떻게 포장해야 하는 가도 중요해야 한다.
Posted by 아름프로
Write a separate bat file as shown below.
Useful when developing against multiple versions of Tomcat.
===========================

set CATALINA_HOME=../
call ./startup.bat
Posted by 아름프로
How to fix the service daemon location after reinstalling Windows to a different path.

If MySQL was previously installed on the Windows machine, a MySQL entry appears among the services. It will not go away no matter what you do: uninstalling the program, editing my.ini, and so on.

Download the noinstall package from mysql.com, and in its bin directory run

mysqld-nt --remove

once, then run the following; the service is re-registered pointing at the changed directory:

mysqld-nt --install

Done! ^^
Posted by 아름프로
Getting MySQL desc-style output in Oracle. MySQL's desc returns columns like:

Field | Type | Null | Key | ....
===================================


---------------------------------------------------

-- Oracle query that approximates MySQL's desc for a single table.
-- Replace 'YOUR_TABLE' (and optionally 'TABLE_OWNER') with real values.
select t1.table_name, t1.column_name, t1.data_type, t2.pk, t2.nval
from   (
          select table_name, column_name, sum(pk) pk, sum(nullvalue) nval
          from   (
                    select distinct t2.table_name, t1.column_name,
                           decode(t2.constraint_type, 'P', 1, 0) pk,
                           decode(t2.constraint_type, 'C', 1, 0) nullvalue
                    from   user_cons_columns t1, user_constraints t2
                    where  t2.constraint_type in ('P', 'C')
                    and    t1.constraint_name = t2.constraint_name
                    and    t2.table_name = t1.table_name
                    and    t1.owner = t2.owner
                    and    t1.table_name = 'YOUR_TABLE'
                    --and    t2.owner = 'TABLE_OWNER'
                    order by t2.table_name, t1.column_name
                 )
          group by table_name, column_name
       ) t2,
       (
          select table_name, column_name, data_type
          from   user_tab_cols
          where  table_name = 'YOUR_TABLE'
       ) t1
where  t1.column_name = t2.column_name(+)
and    t1.table_name = t2.table_name(+)
Posted by 아름프로

-- List the primary-key columns of a table (here MAP_USER_MENU).
select a.table_name,
       b.column_name
from   user_constraints a, user_cons_columns b
where  a.constraint_type = 'P'
and    a.constraint_name = b.constraint_name
and    a.table_name = b.table_name
and    a.table_name = 'MAP_USER_MENU'
Posted by 아름프로
